Test Report: KVM_Linux_crio 19302

                    
686e9da65a2d4195f8e8610efbc417c3b07d1722:2024-07-19:35410

Tests failed (10/221)

TestAddons/Setup (2400.06s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-513705 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-513705 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: signal: killed (39m59.954877288s)

-- stdout --
	* [addons-513705] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19302-122995/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-122995/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "addons-513705" primary control-plane node in "addons-513705" cluster
	* Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	  - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	  - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	  - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	  - Using image ghcr.io/helm/tiller:v2.17.0
	  - Using image docker.io/registry:2.8.3
	  - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	  - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	  - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	  - Using image docker.io/busybox:stable
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	  - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	  - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	  - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	  - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	  - Using image docker.io/marcnuri/yakd:0.0.5
	  - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	  - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	  - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	  - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	  - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	* To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-513705 service yakd-dashboard -n yakd-dashboard
	
	* Verifying ingress addon...
	* Verifying registry addon...
	  - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	* Verifying csi-hostpath-driver addon...
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	* Verifying gcp-auth addon...
	* Your GCP credentials will now be mounted into every pod created in the addons-513705 cluster.
	* If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	* If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	* Enabled addons: nvidia-device-plugin, ingress-dns, default-storageclass, inspektor-gadget, metrics-server, storage-provisioner, helm-tiller, cloud-spanner, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver

-- /stdout --
** stderr ** 
	I0719 03:38:16.718157  131185 out.go:291] Setting OutFile to fd 1 ...
	I0719 03:38:16.718408  131185 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:38:16.718416  131185 out.go:304] Setting ErrFile to fd 2...
	I0719 03:38:16.718421  131185 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:38:16.718577  131185 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-122995/.minikube/bin
	I0719 03:38:16.719153  131185 out.go:298] Setting JSON to false
	I0719 03:38:16.719995  131185 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4840,"bootTime":1721355457,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 03:38:16.720048  131185 start.go:139] virtualization: kvm guest
	I0719 03:38:16.722045  131185 out.go:177] * [addons-513705] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 03:38:16.723202  131185 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 03:38:16.723253  131185 notify.go:220] Checking for updates...
	I0719 03:38:16.725418  131185 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 03:38:16.726608  131185 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-122995/kubeconfig
	I0719 03:38:16.727663  131185 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-122995/.minikube
	I0719 03:38:16.728653  131185 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 03:38:16.729813  131185 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 03:38:16.730992  131185 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 03:38:16.761945  131185 out.go:177] * Using the kvm2 driver based on user configuration
	I0719 03:38:16.763140  131185 start.go:297] selected driver: kvm2
	I0719 03:38:16.763161  131185 start.go:901] validating driver "kvm2" against <nil>
	I0719 03:38:16.763173  131185 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 03:38:16.763861  131185 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 03:38:16.763940  131185 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-122995/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 03:38:16.781370  131185 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 03:38:16.781436  131185 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 03:38:16.781631  131185 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 03:38:16.781659  131185 cni.go:84] Creating CNI manager for ""
	I0719 03:38:16.781667  131185 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 03:38:16.781676  131185 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 03:38:16.781722  131185 start.go:340] cluster config:
	{Name:addons-513705 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-513705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 03:38:16.781800  131185 iso.go:125] acquiring lock: {Name:mk610026cb7ac7ecfa6440021a031d3b49160f81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 03:38:16.783551  131185 out.go:177] * Starting "addons-513705" primary control-plane node in "addons-513705" cluster
	I0719 03:38:16.784728  131185 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 03:38:16.784754  131185 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 03:38:16.784765  131185 cache.go:56] Caching tarball of preloaded images
	I0719 03:38:16.784836  131185 preload.go:172] Found /home/jenkins/minikube-integration/19302-122995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 03:38:16.784846  131185 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 03:38:16.785163  131185 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/addons-513705/config.json ...
	I0719 03:38:16.785186  131185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/addons-513705/config.json: {Name:mkf573db91f9f21124bd56c7bcd36ba926ba3616 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:38:16.785317  131185 start.go:360] acquireMachinesLock for addons-513705: {Name:mkfbbe6ca8c44534b944b48224a0199ec825bc72 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 03:38:16.785359  131185 start.go:364] duration metric: took 29.763µs to acquireMachinesLock for "addons-513705"
	I0719 03:38:16.785379  131185 start.go:93] Provisioning new machine with config: &{Name:addons-513705 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:addons-513705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 03:38:16.785429  131185 start.go:125] createHost starting for "" (driver="kvm2")
	I0719 03:38:16.786937  131185 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0719 03:38:16.787053  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:38:16.787078  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:38:16.801313  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41627
	I0719 03:38:16.801738  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:38:16.802235  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:38:16.802256  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:38:16.802576  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:38:16.802777  131185 main.go:141] libmachine: (addons-513705) Calling .GetMachineName
	I0719 03:38:16.802914  131185 main.go:141] libmachine: (addons-513705) Calling .DriverName
	I0719 03:38:16.803060  131185 start.go:159] libmachine.API.Create for "addons-513705" (driver="kvm2")
	I0719 03:38:16.803089  131185 client.go:168] LocalClient.Create starting
	I0719 03:38:16.803145  131185 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem
	I0719 03:38:16.925496  131185 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem
	I0719 03:38:17.083335  131185 main.go:141] libmachine: Running pre-create checks...
	I0719 03:38:17.083362  131185 main.go:141] libmachine: (addons-513705) Calling .PreCreateCheck
	I0719 03:38:17.083884  131185 main.go:141] libmachine: (addons-513705) Calling .GetConfigRaw
	I0719 03:38:17.084340  131185 main.go:141] libmachine: Creating machine...
	I0719 03:38:17.084355  131185 main.go:141] libmachine: (addons-513705) Calling .Create
	I0719 03:38:17.084525  131185 main.go:141] libmachine: (addons-513705) Creating KVM machine...
	I0719 03:38:17.085933  131185 main.go:141] libmachine: (addons-513705) DBG | found existing default KVM network
	I0719 03:38:17.087136  131185 main.go:141] libmachine: (addons-513705) DBG | I0719 03:38:17.086903  131207 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012fad0}
	I0719 03:38:17.087188  131185 main.go:141] libmachine: (addons-513705) DBG | created network xml: 
	I0719 03:38:17.087206  131185 main.go:141] libmachine: (addons-513705) DBG | <network>
	I0719 03:38:17.087212  131185 main.go:141] libmachine: (addons-513705) DBG |   <name>mk-addons-513705</name>
	I0719 03:38:17.087218  131185 main.go:141] libmachine: (addons-513705) DBG |   <dns enable='no'/>
	I0719 03:38:17.087224  131185 main.go:141] libmachine: (addons-513705) DBG |   
	I0719 03:38:17.087233  131185 main.go:141] libmachine: (addons-513705) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0719 03:38:17.087241  131185 main.go:141] libmachine: (addons-513705) DBG |     <dhcp>
	I0719 03:38:17.087247  131185 main.go:141] libmachine: (addons-513705) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0719 03:38:17.087254  131185 main.go:141] libmachine: (addons-513705) DBG |     </dhcp>
	I0719 03:38:17.087260  131185 main.go:141] libmachine: (addons-513705) DBG |   </ip>
	I0719 03:38:17.087266  131185 main.go:141] libmachine: (addons-513705) DBG |   
	I0719 03:38:17.087271  131185 main.go:141] libmachine: (addons-513705) DBG | </network>
	I0719 03:38:17.087279  131185 main.go:141] libmachine: (addons-513705) DBG | 
	I0719 03:38:17.092533  131185 main.go:141] libmachine: (addons-513705) DBG | trying to create private KVM network mk-addons-513705 192.168.39.0/24...
	I0719 03:38:17.168454  131185 main.go:141] libmachine: (addons-513705) DBG | private KVM network mk-addons-513705 192.168.39.0/24 created
	I0719 03:38:17.168494  131185 main.go:141] libmachine: (addons-513705) DBG | I0719 03:38:17.168366  131207 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19302-122995/.minikube
	I0719 03:38:17.168505  131185 main.go:141] libmachine: (addons-513705) Setting up store path in /home/jenkins/minikube-integration/19302-122995/.minikube/machines/addons-513705 ...
	I0719 03:38:17.168519  131185 main.go:141] libmachine: (addons-513705) Building disk image from file:///home/jenkins/minikube-integration/19302-122995/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 03:38:17.168529  131185 main.go:141] libmachine: (addons-513705) Downloading /home/jenkins/minikube-integration/19302-122995/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19302-122995/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 03:38:17.429124  131185 main.go:141] libmachine: (addons-513705) DBG | I0719 03:38:17.428956  131207 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/addons-513705/id_rsa...
	I0719 03:38:17.718347  131185 main.go:141] libmachine: (addons-513705) DBG | I0719 03:38:17.718202  131207 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/addons-513705/addons-513705.rawdisk...
	I0719 03:38:17.718382  131185 main.go:141] libmachine: (addons-513705) DBG | Writing magic tar header
	I0719 03:38:17.718396  131185 main.go:141] libmachine: (addons-513705) DBG | Writing SSH key tar header
	I0719 03:38:17.718408  131185 main.go:141] libmachine: (addons-513705) DBG | I0719 03:38:17.718324  131207 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19302-122995/.minikube/machines/addons-513705 ...
	I0719 03:38:17.718496  131185 main.go:141] libmachine: (addons-513705) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/addons-513705
	I0719 03:38:17.718530  131185 main.go:141] libmachine: (addons-513705) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995/.minikube/machines
	I0719 03:38:17.718541  131185 main.go:141] libmachine: (addons-513705) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995/.minikube/machines/addons-513705 (perms=drwx------)
	I0719 03:38:17.718551  131185 main.go:141] libmachine: (addons-513705) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995/.minikube/machines (perms=drwxr-xr-x)
	I0719 03:38:17.718560  131185 main.go:141] libmachine: (addons-513705) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995/.minikube (perms=drwxr-xr-x)
	I0719 03:38:17.718575  131185 main.go:141] libmachine: (addons-513705) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995 (perms=drwxrwxr-x)
	I0719 03:38:17.718589  131185 main.go:141] libmachine: (addons-513705) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0719 03:38:17.718605  131185 main.go:141] libmachine: (addons-513705) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0719 03:38:17.718621  131185 main.go:141] libmachine: (addons-513705) Creating domain...
	I0719 03:38:17.718634  131185 main.go:141] libmachine: (addons-513705) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995/.minikube
	I0719 03:38:17.718649  131185 main.go:141] libmachine: (addons-513705) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995
	I0719 03:38:17.718661  131185 main.go:141] libmachine: (addons-513705) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0719 03:38:17.718677  131185 main.go:141] libmachine: (addons-513705) DBG | Checking permissions on dir: /home/jenkins
	I0719 03:38:17.718689  131185 main.go:141] libmachine: (addons-513705) DBG | Checking permissions on dir: /home
	I0719 03:38:17.718703  131185 main.go:141] libmachine: (addons-513705) DBG | Skipping /home - not owner
	I0719 03:38:17.719631  131185 main.go:141] libmachine: (addons-513705) define libvirt domain using xml: 
	I0719 03:38:17.719645  131185 main.go:141] libmachine: (addons-513705) <domain type='kvm'>
	I0719 03:38:17.719675  131185 main.go:141] libmachine: (addons-513705)   <name>addons-513705</name>
	I0719 03:38:17.719700  131185 main.go:141] libmachine: (addons-513705)   <memory unit='MiB'>4000</memory>
	I0719 03:38:17.719713  131185 main.go:141] libmachine: (addons-513705)   <vcpu>2</vcpu>
	I0719 03:38:17.719722  131185 main.go:141] libmachine: (addons-513705)   <features>
	I0719 03:38:17.719731  131185 main.go:141] libmachine: (addons-513705)     <acpi/>
	I0719 03:38:17.719738  131185 main.go:141] libmachine: (addons-513705)     <apic/>
	I0719 03:38:17.719743  131185 main.go:141] libmachine: (addons-513705)     <pae/>
	I0719 03:38:17.719747  131185 main.go:141] libmachine: (addons-513705)     
	I0719 03:38:17.719753  131185 main.go:141] libmachine: (addons-513705)   </features>
	I0719 03:38:17.719760  131185 main.go:141] libmachine: (addons-513705)   <cpu mode='host-passthrough'>
	I0719 03:38:17.719764  131185 main.go:141] libmachine: (addons-513705)   
	I0719 03:38:17.719778  131185 main.go:141] libmachine: (addons-513705)   </cpu>
	I0719 03:38:17.719790  131185 main.go:141] libmachine: (addons-513705)   <os>
	I0719 03:38:17.719801  131185 main.go:141] libmachine: (addons-513705)     <type>hvm</type>
	I0719 03:38:17.719812  131185 main.go:141] libmachine: (addons-513705)     <boot dev='cdrom'/>
	I0719 03:38:17.719822  131185 main.go:141] libmachine: (addons-513705)     <boot dev='hd'/>
	I0719 03:38:17.719834  131185 main.go:141] libmachine: (addons-513705)     <bootmenu enable='no'/>
	I0719 03:38:17.719844  131185 main.go:141] libmachine: (addons-513705)   </os>
	I0719 03:38:17.719852  131185 main.go:141] libmachine: (addons-513705)   <devices>
	I0719 03:38:17.719857  131185 main.go:141] libmachine: (addons-513705)     <disk type='file' device='cdrom'>
	I0719 03:38:17.719869  131185 main.go:141] libmachine: (addons-513705)       <source file='/home/jenkins/minikube-integration/19302-122995/.minikube/machines/addons-513705/boot2docker.iso'/>
	I0719 03:38:17.719881  131185 main.go:141] libmachine: (addons-513705)       <target dev='hdc' bus='scsi'/>
	I0719 03:38:17.719892  131185 main.go:141] libmachine: (addons-513705)       <readonly/>
	I0719 03:38:17.719903  131185 main.go:141] libmachine: (addons-513705)     </disk>
	I0719 03:38:17.719941  131185 main.go:141] libmachine: (addons-513705)     <disk type='file' device='disk'>
	I0719 03:38:17.719967  131185 main.go:141] libmachine: (addons-513705)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0719 03:38:17.719977  131185 main.go:141] libmachine: (addons-513705)       <source file='/home/jenkins/minikube-integration/19302-122995/.minikube/machines/addons-513705/addons-513705.rawdisk'/>
	I0719 03:38:17.719984  131185 main.go:141] libmachine: (addons-513705)       <target dev='hda' bus='virtio'/>
	I0719 03:38:17.719990  131185 main.go:141] libmachine: (addons-513705)     </disk>
	I0719 03:38:17.719997  131185 main.go:141] libmachine: (addons-513705)     <interface type='network'>
	I0719 03:38:17.720004  131185 main.go:141] libmachine: (addons-513705)       <source network='mk-addons-513705'/>
	I0719 03:38:17.720010  131185 main.go:141] libmachine: (addons-513705)       <model type='virtio'/>
	I0719 03:38:17.720016  131185 main.go:141] libmachine: (addons-513705)     </interface>
	I0719 03:38:17.720022  131185 main.go:141] libmachine: (addons-513705)     <interface type='network'>
	I0719 03:38:17.720028  131185 main.go:141] libmachine: (addons-513705)       <source network='default'/>
	I0719 03:38:17.720037  131185 main.go:141] libmachine: (addons-513705)       <model type='virtio'/>
	I0719 03:38:17.720058  131185 main.go:141] libmachine: (addons-513705)     </interface>
	I0719 03:38:17.720078  131185 main.go:141] libmachine: (addons-513705)     <serial type='pty'>
	I0719 03:38:17.720086  131185 main.go:141] libmachine: (addons-513705)       <target port='0'/>
	I0719 03:38:17.720095  131185 main.go:141] libmachine: (addons-513705)     </serial>
	I0719 03:38:17.720109  131185 main.go:141] libmachine: (addons-513705)     <console type='pty'>
	I0719 03:38:17.720122  131185 main.go:141] libmachine: (addons-513705)       <target type='serial' port='0'/>
	I0719 03:38:17.720133  131185 main.go:141] libmachine: (addons-513705)     </console>
	I0719 03:38:17.720144  131185 main.go:141] libmachine: (addons-513705)     <rng model='virtio'>
	I0719 03:38:17.720157  131185 main.go:141] libmachine: (addons-513705)       <backend model='random'>/dev/random</backend>
	I0719 03:38:17.720167  131185 main.go:141] libmachine: (addons-513705)     </rng>
	I0719 03:38:17.720178  131185 main.go:141] libmachine: (addons-513705)     
	I0719 03:38:17.720191  131185 main.go:141] libmachine: (addons-513705)     
	I0719 03:38:17.720201  131185 main.go:141] libmachine: (addons-513705)   </devices>
	I0719 03:38:17.720211  131185 main.go:141] libmachine: (addons-513705) </domain>
	I0719 03:38:17.720223  131185 main.go:141] libmachine: (addons-513705) 
	I0719 03:38:17.769263  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:b3:8d:ec in network default
	I0719 03:38:17.769972  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:17.769988  131185 main.go:141] libmachine: (addons-513705) Ensuring networks are active...
	I0719 03:38:17.770680  131185 main.go:141] libmachine: (addons-513705) Ensuring network default is active
	I0719 03:38:17.770992  131185 main.go:141] libmachine: (addons-513705) Ensuring network mk-addons-513705 is active
	I0719 03:38:17.772521  131185 main.go:141] libmachine: (addons-513705) Getting domain xml...
	I0719 03:38:17.773128  131185 main.go:141] libmachine: (addons-513705) Creating domain...
	I0719 03:38:19.322276  131185 main.go:141] libmachine: (addons-513705) Waiting to get IP...
	I0719 03:38:19.323062  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:19.323426  131185 main.go:141] libmachine: (addons-513705) DBG | unable to find current IP address of domain addons-513705 in network mk-addons-513705
	I0719 03:38:19.323464  131185 main.go:141] libmachine: (addons-513705) DBG | I0719 03:38:19.323425  131207 retry.go:31] will retry after 200.661281ms: waiting for machine to come up
	I0719 03:38:19.525719  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:19.526134  131185 main.go:141] libmachine: (addons-513705) DBG | unable to find current IP address of domain addons-513705 in network mk-addons-513705
	I0719 03:38:19.526157  131185 main.go:141] libmachine: (addons-513705) DBG | I0719 03:38:19.526084  131207 retry.go:31] will retry after 366.733679ms: waiting for machine to come up
	I0719 03:38:19.894554  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:19.895006  131185 main.go:141] libmachine: (addons-513705) DBG | unable to find current IP address of domain addons-513705 in network mk-addons-513705
	I0719 03:38:19.895030  131185 main.go:141] libmachine: (addons-513705) DBG | I0719 03:38:19.894960  131207 retry.go:31] will retry after 463.136616ms: waiting for machine to come up
	I0719 03:38:20.359553  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:20.359904  131185 main.go:141] libmachine: (addons-513705) DBG | unable to find current IP address of domain addons-513705 in network mk-addons-513705
	I0719 03:38:20.359930  131185 main.go:141] libmachine: (addons-513705) DBG | I0719 03:38:20.359844  131207 retry.go:31] will retry after 367.78707ms: waiting for machine to come up
	I0719 03:38:20.729492  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:20.729903  131185 main.go:141] libmachine: (addons-513705) DBG | unable to find current IP address of domain addons-513705 in network mk-addons-513705
	I0719 03:38:20.729924  131185 main.go:141] libmachine: (addons-513705) DBG | I0719 03:38:20.729880  131207 retry.go:31] will retry after 562.70237ms: waiting for machine to come up
	I0719 03:38:21.294580  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:21.294941  131185 main.go:141] libmachine: (addons-513705) DBG | unable to find current IP address of domain addons-513705 in network mk-addons-513705
	I0719 03:38:21.294977  131185 main.go:141] libmachine: (addons-513705) DBG | I0719 03:38:21.294926  131207 retry.go:31] will retry after 648.218445ms: waiting for machine to come up
	I0719 03:38:21.944316  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:21.944626  131185 main.go:141] libmachine: (addons-513705) DBG | unable to find current IP address of domain addons-513705 in network mk-addons-513705
	I0719 03:38:21.944655  131185 main.go:141] libmachine: (addons-513705) DBG | I0719 03:38:21.944603  131207 retry.go:31] will retry after 1.176923661s: waiting for machine to come up
	I0719 03:38:23.123171  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:23.123736  131185 main.go:141] libmachine: (addons-513705) DBG | unable to find current IP address of domain addons-513705 in network mk-addons-513705
	I0719 03:38:23.124004  131185 main.go:141] libmachine: (addons-513705) DBG | I0719 03:38:23.123757  131207 retry.go:31] will retry after 1.468362907s: waiting for machine to come up
	I0719 03:38:24.593786  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:24.594214  131185 main.go:141] libmachine: (addons-513705) DBG | unable to find current IP address of domain addons-513705 in network mk-addons-513705
	I0719 03:38:24.594240  131185 main.go:141] libmachine: (addons-513705) DBG | I0719 03:38:24.594178  131207 retry.go:31] will retry after 1.598210718s: waiting for machine to come up
	I0719 03:38:26.195298  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:26.195766  131185 main.go:141] libmachine: (addons-513705) DBG | unable to find current IP address of domain addons-513705 in network mk-addons-513705
	I0719 03:38:26.195795  131185 main.go:141] libmachine: (addons-513705) DBG | I0719 03:38:26.195719  131207 retry.go:31] will retry after 1.801872126s: waiting for machine to come up
	I0719 03:38:27.998864  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:27.999380  131185 main.go:141] libmachine: (addons-513705) DBG | unable to find current IP address of domain addons-513705 in network mk-addons-513705
	I0719 03:38:27.999417  131185 main.go:141] libmachine: (addons-513705) DBG | I0719 03:38:27.999320  131207 retry.go:31] will retry after 2.721083851s: waiting for machine to come up
	I0719 03:38:30.724181  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:30.724677  131185 main.go:141] libmachine: (addons-513705) DBG | unable to find current IP address of domain addons-513705 in network mk-addons-513705
	I0719 03:38:30.724704  131185 main.go:141] libmachine: (addons-513705) DBG | I0719 03:38:30.724599  131207 retry.go:31] will retry after 2.739187243s: waiting for machine to come up
	I0719 03:38:33.466115  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:33.466459  131185 main.go:141] libmachine: (addons-513705) DBG | unable to find current IP address of domain addons-513705 in network mk-addons-513705
	I0719 03:38:33.466492  131185 main.go:141] libmachine: (addons-513705) DBG | I0719 03:38:33.466409  131207 retry.go:31] will retry after 3.37916142s: waiting for machine to come up
	I0719 03:38:36.846770  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:36.847223  131185 main.go:141] libmachine: (addons-513705) Found IP for machine: 192.168.39.209
	I0719 03:38:36.847247  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has current primary IP address 192.168.39.209 and MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:36.847253  131185 main.go:141] libmachine: (addons-513705) Reserving static IP address...
	I0719 03:38:36.847666  131185 main.go:141] libmachine: (addons-513705) DBG | unable to find host DHCP lease matching {name: "addons-513705", mac: "52:54:00:d1:ce:f2", ip: "192.168.39.209"} in network mk-addons-513705
	I0719 03:38:36.916777  131185 main.go:141] libmachine: (addons-513705) DBG | Getting to WaitForSSH function...
	I0719 03:38:36.916811  131185 main.go:141] libmachine: (addons-513705) Reserved static IP address: 192.168.39.209
	I0719 03:38:36.916828  131185 main.go:141] libmachine: (addons-513705) Waiting for SSH to be available...
	I0719 03:38:36.919361  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:36.919819  131185 main.go:141] libmachine: (addons-513705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ce:f2", ip: ""} in network mk-addons-513705: {Iface:virbr1 ExpiryTime:2024-07-19 04:38:31 +0000 UTC Type:0 Mac:52:54:00:d1:ce:f2 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d1:ce:f2}
	I0719 03:38:36.919849  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined IP address 192.168.39.209 and MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:36.919934  131185 main.go:141] libmachine: (addons-513705) DBG | Using SSH client type: external
	I0719 03:38:36.919962  131185 main.go:141] libmachine: (addons-513705) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/addons-513705/id_rsa (-rw-------)
	I0719 03:38:36.919999  131185 main.go:141] libmachine: (addons-513705) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.209 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-122995/.minikube/machines/addons-513705/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 03:38:36.920014  131185 main.go:141] libmachine: (addons-513705) DBG | About to run SSH command:
	I0719 03:38:36.920042  131185 main.go:141] libmachine: (addons-513705) DBG | exit 0
	I0719 03:38:37.048696  131185 main.go:141] libmachine: (addons-513705) DBG | SSH cmd err, output: <nil>: 
	I0719 03:38:37.048944  131185 main.go:141] libmachine: (addons-513705) KVM machine creation complete!
	I0719 03:38:37.049310  131185 main.go:141] libmachine: (addons-513705) Calling .GetConfigRaw
	I0719 03:38:37.049944  131185 main.go:141] libmachine: (addons-513705) Calling .DriverName
	I0719 03:38:37.050194  131185 main.go:141] libmachine: (addons-513705) Calling .DriverName
	I0719 03:38:37.050374  131185 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0719 03:38:37.050390  131185 main.go:141] libmachine: (addons-513705) Calling .GetState
	I0719 03:38:37.051577  131185 main.go:141] libmachine: Detecting operating system of created instance...
	I0719 03:38:37.051595  131185 main.go:141] libmachine: Waiting for SSH to be available...
	I0719 03:38:37.051603  131185 main.go:141] libmachine: Getting to WaitForSSH function...
	I0719 03:38:37.051610  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHHostname
	I0719 03:38:37.054054  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:37.054412  131185 main.go:141] libmachine: (addons-513705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ce:f2", ip: ""} in network mk-addons-513705: {Iface:virbr1 ExpiryTime:2024-07-19 04:38:31 +0000 UTC Type:0 Mac:52:54:00:d1:ce:f2 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-513705 Clientid:01:52:54:00:d1:ce:f2}
	I0719 03:38:37.054440  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined IP address 192.168.39.209 and MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:37.054565  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHPort
	I0719 03:38:37.054739  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHKeyPath
	I0719 03:38:37.054897  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHKeyPath
	I0719 03:38:37.055125  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHUsername
	I0719 03:38:37.055306  131185 main.go:141] libmachine: Using SSH client type: native
	I0719 03:38:37.055556  131185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0719 03:38:37.055568  131185 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0719 03:38:37.152392  131185 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 03:38:37.152416  131185 main.go:141] libmachine: Detecting the provisioner...
	I0719 03:38:37.152424  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHHostname
	I0719 03:38:37.155193  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:37.155517  131185 main.go:141] libmachine: (addons-513705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ce:f2", ip: ""} in network mk-addons-513705: {Iface:virbr1 ExpiryTime:2024-07-19 04:38:31 +0000 UTC Type:0 Mac:52:54:00:d1:ce:f2 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-513705 Clientid:01:52:54:00:d1:ce:f2}
	I0719 03:38:37.155536  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined IP address 192.168.39.209 and MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:37.155682  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHPort
	I0719 03:38:37.155881  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHKeyPath
	I0719 03:38:37.156052  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHKeyPath
	I0719 03:38:37.156247  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHUsername
	I0719 03:38:37.156419  131185 main.go:141] libmachine: Using SSH client type: native
	I0719 03:38:37.156585  131185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0719 03:38:37.156595  131185 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0719 03:38:37.257470  131185 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0719 03:38:37.257547  131185 main.go:141] libmachine: found compatible host: buildroot
	I0719 03:38:37.257556  131185 main.go:141] libmachine: Provisioning with buildroot...
	I0719 03:38:37.257565  131185 main.go:141] libmachine: (addons-513705) Calling .GetMachineName
	I0719 03:38:37.257799  131185 buildroot.go:166] provisioning hostname "addons-513705"
	I0719 03:38:37.257825  131185 main.go:141] libmachine: (addons-513705) Calling .GetMachineName
	I0719 03:38:37.258026  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHHostname
	I0719 03:38:37.261977  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:37.262336  131185 main.go:141] libmachine: (addons-513705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ce:f2", ip: ""} in network mk-addons-513705: {Iface:virbr1 ExpiryTime:2024-07-19 04:38:31 +0000 UTC Type:0 Mac:52:54:00:d1:ce:f2 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-513705 Clientid:01:52:54:00:d1:ce:f2}
	I0719 03:38:37.262357  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined IP address 192.168.39.209 and MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:37.262522  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHPort
	I0719 03:38:37.262702  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHKeyPath
	I0719 03:38:37.262870  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHKeyPath
	I0719 03:38:37.263045  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHUsername
	I0719 03:38:37.263250  131185 main.go:141] libmachine: Using SSH client type: native
	I0719 03:38:37.263481  131185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0719 03:38:37.263501  131185 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-513705 && echo "addons-513705" | sudo tee /etc/hostname
	I0719 03:38:37.382210  131185 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-513705
	
	I0719 03:38:37.382239  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHHostname
	I0719 03:38:37.384851  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:37.385283  131185 main.go:141] libmachine: (addons-513705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ce:f2", ip: ""} in network mk-addons-513705: {Iface:virbr1 ExpiryTime:2024-07-19 04:38:31 +0000 UTC Type:0 Mac:52:54:00:d1:ce:f2 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-513705 Clientid:01:52:54:00:d1:ce:f2}
	I0719 03:38:37.385315  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined IP address 192.168.39.209 and MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:37.385454  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHPort
	I0719 03:38:37.385662  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHKeyPath
	I0719 03:38:37.385868  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHKeyPath
	I0719 03:38:37.386027  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHUsername
	I0719 03:38:37.386189  131185 main.go:141] libmachine: Using SSH client type: native
	I0719 03:38:37.386404  131185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0719 03:38:37.386423  131185 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-513705' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-513705/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-513705' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 03:38:37.492646  131185 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 03:38:37.492676  131185 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-122995/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-122995/.minikube}
	I0719 03:38:37.492707  131185 buildroot.go:174] setting up certificates
	I0719 03:38:37.492719  131185 provision.go:84] configureAuth start
	I0719 03:38:37.492729  131185 main.go:141] libmachine: (addons-513705) Calling .GetMachineName
	I0719 03:38:37.492993  131185 main.go:141] libmachine: (addons-513705) Calling .GetIP
	I0719 03:38:37.495205  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:37.495507  131185 main.go:141] libmachine: (addons-513705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ce:f2", ip: ""} in network mk-addons-513705: {Iface:virbr1 ExpiryTime:2024-07-19 04:38:31 +0000 UTC Type:0 Mac:52:54:00:d1:ce:f2 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-513705 Clientid:01:52:54:00:d1:ce:f2}
	I0719 03:38:37.495544  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined IP address 192.168.39.209 and MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:37.495707  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHHostname
	I0719 03:38:37.497515  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:37.497830  131185 main.go:141] libmachine: (addons-513705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ce:f2", ip: ""} in network mk-addons-513705: {Iface:virbr1 ExpiryTime:2024-07-19 04:38:31 +0000 UTC Type:0 Mac:52:54:00:d1:ce:f2 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-513705 Clientid:01:52:54:00:d1:ce:f2}
	I0719 03:38:37.497860  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined IP address 192.168.39.209 and MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:37.497939  131185 provision.go:143] copyHostCerts
	I0719 03:38:37.498022  131185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem (1082 bytes)
	I0719 03:38:37.498130  131185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem (1123 bytes)
	I0719 03:38:37.498189  131185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem (1679 bytes)
	I0719 03:38:37.498235  131185 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem org=jenkins.addons-513705 san=[127.0.0.1 192.168.39.209 addons-513705 localhost minikube]
	I0719 03:38:37.646369  131185 provision.go:177] copyRemoteCerts
	I0719 03:38:37.646429  131185 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 03:38:37.646455  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHHostname
	I0719 03:38:37.648707  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:37.648951  131185 main.go:141] libmachine: (addons-513705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ce:f2", ip: ""} in network mk-addons-513705: {Iface:virbr1 ExpiryTime:2024-07-19 04:38:31 +0000 UTC Type:0 Mac:52:54:00:d1:ce:f2 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-513705 Clientid:01:52:54:00:d1:ce:f2}
	I0719 03:38:37.648976  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined IP address 192.168.39.209 and MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:37.649152  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHPort
	I0719 03:38:37.649387  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHKeyPath
	I0719 03:38:37.649546  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHUsername
	I0719 03:38:37.649685  131185 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/addons-513705/id_rsa Username:docker}
	I0719 03:38:37.730705  131185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 03:38:37.752926  131185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0719 03:38:37.773807  131185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 03:38:37.795235  131185 provision.go:87] duration metric: took 302.500173ms to configureAuth
	I0719 03:38:37.795267  131185 buildroot.go:189] setting minikube options for container-runtime
	I0719 03:38:37.795434  131185 config.go:182] Loaded profile config "addons-513705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 03:38:37.795507  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHHostname
	I0719 03:38:37.798024  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:37.798413  131185 main.go:141] libmachine: (addons-513705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ce:f2", ip: ""} in network mk-addons-513705: {Iface:virbr1 ExpiryTime:2024-07-19 04:38:31 +0000 UTC Type:0 Mac:52:54:00:d1:ce:f2 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-513705 Clientid:01:52:54:00:d1:ce:f2}
	I0719 03:38:37.798442  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined IP address 192.168.39.209 and MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:37.798619  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHPort
	I0719 03:38:37.798836  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHKeyPath
	I0719 03:38:37.798995  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHKeyPath
	I0719 03:38:37.799244  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHUsername
	I0719 03:38:37.799414  131185 main.go:141] libmachine: Using SSH client type: native
	I0719 03:38:37.799567  131185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0719 03:38:37.799580  131185 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 03:38:38.041175  131185 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 03:38:38.041213  131185 main.go:141] libmachine: Checking connection to Docker...
	I0719 03:38:38.041226  131185 main.go:141] libmachine: (addons-513705) Calling .GetURL
	I0719 03:38:38.042552  131185 main.go:141] libmachine: (addons-513705) DBG | Using libvirt version 6000000
	I0719 03:38:38.044511  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:38.044813  131185 main.go:141] libmachine: (addons-513705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ce:f2", ip: ""} in network mk-addons-513705: {Iface:virbr1 ExpiryTime:2024-07-19 04:38:31 +0000 UTC Type:0 Mac:52:54:00:d1:ce:f2 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-513705 Clientid:01:52:54:00:d1:ce:f2}
	I0719 03:38:38.044837  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined IP address 192.168.39.209 and MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:38.045003  131185 main.go:141] libmachine: Docker is up and running!
	I0719 03:38:38.045019  131185 main.go:141] libmachine: Reticulating splines...
	I0719 03:38:38.045028  131185 client.go:171] duration metric: took 21.241929511s to LocalClient.Create
	I0719 03:38:38.045046  131185 start.go:167] duration metric: took 21.241989149s to libmachine.API.Create "addons-513705"
	I0719 03:38:38.045055  131185 start.go:293] postStartSetup for "addons-513705" (driver="kvm2")
	I0719 03:38:38.045080  131185 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 03:38:38.045106  131185 main.go:141] libmachine: (addons-513705) Calling .DriverName
	I0719 03:38:38.045366  131185 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 03:38:38.045393  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHHostname
	I0719 03:38:38.047456  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:38.047738  131185 main.go:141] libmachine: (addons-513705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ce:f2", ip: ""} in network mk-addons-513705: {Iface:virbr1 ExpiryTime:2024-07-19 04:38:31 +0000 UTC Type:0 Mac:52:54:00:d1:ce:f2 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-513705 Clientid:01:52:54:00:d1:ce:f2}
	I0719 03:38:38.047764  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined IP address 192.168.39.209 and MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:38.047862  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHPort
	I0719 03:38:38.048034  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHKeyPath
	I0719 03:38:38.048168  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHUsername
	I0719 03:38:38.048325  131185 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/addons-513705/id_rsa Username:docker}
	I0719 03:38:38.126784  131185 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 03:38:38.130524  131185 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 03:38:38.130547  131185 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-122995/.minikube/addons for local assets ...
	I0719 03:38:38.130630  131185 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-122995/.minikube/files for local assets ...
	I0719 03:38:38.130655  131185 start.go:296] duration metric: took 85.594604ms for postStartSetup
	I0719 03:38:38.130689  131185 main.go:141] libmachine: (addons-513705) Calling .GetConfigRaw
	I0719 03:38:38.131270  131185 main.go:141] libmachine: (addons-513705) Calling .GetIP
	I0719 03:38:38.133835  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:38.134180  131185 main.go:141] libmachine: (addons-513705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ce:f2", ip: ""} in network mk-addons-513705: {Iface:virbr1 ExpiryTime:2024-07-19 04:38:31 +0000 UTC Type:0 Mac:52:54:00:d1:ce:f2 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-513705 Clientid:01:52:54:00:d1:ce:f2}
	I0719 03:38:38.134212  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined IP address 192.168.39.209 and MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:38.134445  131185 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/addons-513705/config.json ...
	I0719 03:38:38.134629  131185 start.go:128] duration metric: took 21.349190104s to createHost
	I0719 03:38:38.134654  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHHostname
	I0719 03:38:38.137089  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:38.137401  131185 main.go:141] libmachine: (addons-513705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ce:f2", ip: ""} in network mk-addons-513705: {Iface:virbr1 ExpiryTime:2024-07-19 04:38:31 +0000 UTC Type:0 Mac:52:54:00:d1:ce:f2 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-513705 Clientid:01:52:54:00:d1:ce:f2}
	I0719 03:38:38.137428  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined IP address 192.168.39.209 and MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:38.137590  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHPort
	I0719 03:38:38.137783  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHKeyPath
	I0719 03:38:38.137987  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHKeyPath
	I0719 03:38:38.138129  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHUsername
	I0719 03:38:38.138295  131185 main.go:141] libmachine: Using SSH client type: native
	I0719 03:38:38.138500  131185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.209 22 <nil> <nil>}
	I0719 03:38:38.138515  131185 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 03:38:38.237396  131185 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721360318.211609634
	
	I0719 03:38:38.237428  131185 fix.go:216] guest clock: 1721360318.211609634
	I0719 03:38:38.237441  131185 fix.go:229] Guest: 2024-07-19 03:38:38.211609634 +0000 UTC Remote: 2024-07-19 03:38:38.134642002 +0000 UTC m=+21.449634682 (delta=76.967632ms)
	I0719 03:38:38.237468  131185 fix.go:200] guest clock delta is within tolerance: 76.967632ms
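	(For reference: the fix.go lines above read the guest clock with `date +%s.%N` and accept the result if the difference from the host clock is inside a tolerance. A minimal sketch of that comparison; the parsing and the 2s threshold are illustrative, not minikube's exact values.)

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// guestClockDelta parses the output of `date +%s.%N` (seconds.nanoseconds)
	// and returns how far the guest clock is from the given host time.
	func guestClockDelta(dateOutput string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(dateOutput, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return host.Sub(guest), nil
	}

	func main() {
		// Values taken from the log above.
		delta, err := guestClockDelta("1721360318.211609634",
			time.Date(2024, 7, 19, 3, 38, 38, 134642002, time.UTC))
		if err != nil {
			panic(err)
		}
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // illustrative threshold
		fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta < tolerance)
	}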
	I0719 03:38:38.237475  131185 start.go:83] releasing machines lock for "addons-513705", held for 21.452104057s
	I0719 03:38:38.237503  131185 main.go:141] libmachine: (addons-513705) Calling .DriverName
	I0719 03:38:38.237849  131185 main.go:141] libmachine: (addons-513705) Calling .GetIP
	I0719 03:38:38.240389  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:38.240729  131185 main.go:141] libmachine: (addons-513705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ce:f2", ip: ""} in network mk-addons-513705: {Iface:virbr1 ExpiryTime:2024-07-19 04:38:31 +0000 UTC Type:0 Mac:52:54:00:d1:ce:f2 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-513705 Clientid:01:52:54:00:d1:ce:f2}
	I0719 03:38:38.240759  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined IP address 192.168.39.209 and MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:38.240940  131185 main.go:141] libmachine: (addons-513705) Calling .DriverName
	I0719 03:38:38.241435  131185 main.go:141] libmachine: (addons-513705) Calling .DriverName
	I0719 03:38:38.241605  131185 main.go:141] libmachine: (addons-513705) Calling .DriverName
	I0719 03:38:38.241689  131185 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 03:38:38.241733  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHHostname
	I0719 03:38:38.241828  131185 ssh_runner.go:195] Run: cat /version.json
	I0719 03:38:38.241853  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHHostname
	I0719 03:38:38.244208  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:38.244514  131185 main.go:141] libmachine: (addons-513705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ce:f2", ip: ""} in network mk-addons-513705: {Iface:virbr1 ExpiryTime:2024-07-19 04:38:31 +0000 UTC Type:0 Mac:52:54:00:d1:ce:f2 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-513705 Clientid:01:52:54:00:d1:ce:f2}
	I0719 03:38:38.244542  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:38.244595  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined IP address 192.168.39.209 and MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:38.244762  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHPort
	I0719 03:38:38.244917  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHKeyPath
	I0719 03:38:38.245024  131185 main.go:141] libmachine: (addons-513705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ce:f2", ip: ""} in network mk-addons-513705: {Iface:virbr1 ExpiryTime:2024-07-19 04:38:31 +0000 UTC Type:0 Mac:52:54:00:d1:ce:f2 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-513705 Clientid:01:52:54:00:d1:ce:f2}
	I0719 03:38:38.245045  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined IP address 192.168.39.209 and MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:38.245107  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHUsername
	I0719 03:38:38.245194  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHPort
	I0719 03:38:38.245285  131185 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/addons-513705/id_rsa Username:docker}
	I0719 03:38:38.245323  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHKeyPath
	I0719 03:38:38.245412  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHUsername
	I0719 03:38:38.245582  131185 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/addons-513705/id_rsa Username:docker}
	I0719 03:38:38.317434  131185 ssh_runner.go:195] Run: systemctl --version
	I0719 03:38:38.351110  131185 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 03:38:38.504130  131185 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 03:38:38.509877  131185 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 03:38:38.509947  131185 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 03:38:38.525809  131185 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
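	(For reference: the `find ... -exec mv {} {}.mk_disabled` above sidelines any pre-existing bridge/podman CNI configs so minikube's own bridge config takes effect. A roughly equivalent Go sketch; the patterns and suffix mirror the command in the log, this is not the cni.go implementation.)

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		var disabled []string
		for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
			matches, err := filepath.Glob(pattern)
			if err != nil {
				panic(err)
			}
			for _, m := range matches {
				if strings.HasSuffix(m, ".mk_disabled") {
					continue // already sidelined
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					panic(err)
				}
				disabled = append(disabled, m)
			}
		}
		fmt.Printf("disabled %v bridge cni config(s)\n", disabled)
	}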
	I0719 03:38:38.525834  131185 start.go:495] detecting cgroup driver to use...
	I0719 03:38:38.525906  131185 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 03:38:38.542360  131185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 03:38:38.557857  131185 docker.go:217] disabling cri-docker service (if available) ...
	I0719 03:38:38.557924  131185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 03:38:38.572797  131185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 03:38:38.585477  131185 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 03:38:38.704886  131185 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 03:38:38.857108  131185 docker.go:233] disabling docker service ...
	I0719 03:38:38.857171  131185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 03:38:38.870939  131185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 03:38:38.882906  131185 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 03:38:39.002092  131185 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 03:38:39.106341  131185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 03:38:39.119472  131185 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 03:38:39.135830  131185 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 03:38:39.135889  131185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 03:38:39.145391  131185 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 03:38:39.145457  131185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 03:38:39.154884  131185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 03:38:39.164204  131185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 03:38:39.173601  131185 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 03:38:39.183738  131185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 03:38:39.193270  131185 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 03:38:39.208734  131185 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
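	(For reference: the sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place to pin the pause image, force the cgroupfs cgroup manager, and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A hedged Go sketch of the same kind of line-oriented config rewrite; the regexes mirror two of the sed expressions, this is not minikube's crio.go.)

	package main

	import (
		"os"
		"regexp"
	)

	// rewriteLine replaces every line matching re with repl in the given file.
	func rewriteLine(path string, re *regexp.Regexp, repl string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		out := re.ReplaceAll(data, []byte(repl))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		// pause_image = "registry.k8s.io/pause:3.9"
		if err := rewriteLine(conf, regexp.MustCompile(`(?m)^.*pause_image = .*$`),
			`pause_image = "registry.k8s.io/pause:3.9"`); err != nil {
			panic(err)
		}
		// cgroup_manager = "cgroupfs"
		if err := rewriteLine(conf, regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`),
			`cgroup_manager = "cgroupfs"`); err != nil {
			panic(err)
		}
	}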
	I0719 03:38:39.218434  131185 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 03:38:39.227317  131185 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 03:38:39.227381  131185 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 03:38:39.239689  131185 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 03:38:39.248262  131185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 03:38:39.349551  131185 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 03:38:39.482131  131185 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 03:38:39.482219  131185 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 03:38:39.486361  131185 start.go:563] Will wait 60s for crictl version
	I0719 03:38:39.486446  131185 ssh_runner.go:195] Run: which crictl
	I0719 03:38:39.489675  131185 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 03:38:39.525220  131185 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
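	(For reference: after restarting CRI-O the start code waits up to 60s for /var/run/crio/crio.sock to appear and then up to 60s more for `crictl version` to answer. A bare-bones polling sketch for the socket wait; interval is illustrative and the SSH indirection is omitted.)

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForPath polls until path exists or the timeout elapses.
	func waitForPath(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(500 * time.Millisecond) // illustrative poll interval
		}
	}

	func main() {
		if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			panic(err)
		}
		fmt.Println("crio socket is ready")
	}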
	I0719 03:38:39.525313  131185 ssh_runner.go:195] Run: crio --version
	I0719 03:38:39.550166  131185 ssh_runner.go:195] Run: crio --version
	I0719 03:38:39.578483  131185 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 03:38:39.579630  131185 main.go:141] libmachine: (addons-513705) Calling .GetIP
	I0719 03:38:39.582201  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:39.582498  131185 main.go:141] libmachine: (addons-513705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ce:f2", ip: ""} in network mk-addons-513705: {Iface:virbr1 ExpiryTime:2024-07-19 04:38:31 +0000 UTC Type:0 Mac:52:54:00:d1:ce:f2 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-513705 Clientid:01:52:54:00:d1:ce:f2}
	I0719 03:38:39.582527  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined IP address 192.168.39.209 and MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:38:39.582693  131185 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 03:38:39.586313  131185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 03:38:39.597250  131185 kubeadm.go:883] updating cluster {Name:addons-513705 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-513705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 03:38:39.597366  131185 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 03:38:39.597407  131185 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 03:38:39.626741  131185 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 03:38:39.626809  131185 ssh_runner.go:195] Run: which lz4
	I0719 03:38:39.630314  131185 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 03:38:39.633863  131185 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 03:38:39.633891  131185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0719 03:38:40.815948  131185 crio.go:462] duration metric: took 1.185660876s to copy over tarball
	I0719 03:38:40.816034  131185 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 03:38:42.943027  131185 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.126957751s)
	I0719 03:38:42.943063  131185 crio.go:469] duration metric: took 2.127079014s to extract the tarball
	I0719 03:38:42.943075  131185 ssh_runner.go:146] rm: /preloaded.tar.lz4
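	(For reference: because no preloaded images were found, the ~406MB preloaded-images tarball is scp'd to /preloaded.tar.lz4 and unpacked on the guest with `tar -I lz4`, then removed. Purely for illustration, a sketch of reading such a .tar.lz4 archive from Go, assuming the third-party github.com/pierrec/lz4/v4 package; the test itself shells out to tar instead.)

	package main

	import (
		"archive/tar"
		"fmt"
		"io"
		"os"

		"github.com/pierrec/lz4/v4"
	)

	func main() {
		f, err := os.Open("preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		// Walk the archive without extracting, just to count entries and bytes.
		tr := tar.NewReader(lz4.NewReader(f))
		var files, total int64
		for {
			hdr, err := tr.Next()
			if err == io.EOF {
				break
			}
			if err != nil {
				panic(err)
			}
			files++
			total += hdr.Size
		}
		fmt.Printf("archive holds %d entries, %d bytes uncompressed\n", files, total)
	}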
	I0719 03:38:42.984543  131185 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 03:38:43.024957  131185 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 03:38:43.024983  131185 cache_images.go:84] Images are preloaded, skipping loading
	I0719 03:38:43.024992  131185 kubeadm.go:934] updating node { 192.168.39.209 8443 v1.30.3 crio true true} ...
	I0719 03:38:43.025145  131185 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-513705 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.209
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-513705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 03:38:43.025239  131185 ssh_runner.go:195] Run: crio config
	I0719 03:38:43.072002  131185 cni.go:84] Creating CNI manager for ""
	I0719 03:38:43.072023  131185 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 03:38:43.072033  131185 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 03:38:43.072055  131185 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.209 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-513705 NodeName:addons-513705 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.209"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.209 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 03:38:43.072198  131185 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.209
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-513705"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.209
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.209"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 03:38:43.072261  131185 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 03:38:43.082141  131185 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 03:38:43.082228  131185 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 03:38:43.091598  131185 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0719 03:38:43.106891  131185 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 03:38:43.121619  131185 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0719 03:38:43.136443  131185 ssh_runner.go:195] Run: grep 192.168.39.209	control-plane.minikube.internal$ /etc/hosts
	I0719 03:38:43.139994  131185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.209	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
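	(For reference: the bash one-liner above rewrites /etc/hosts so there is exactly one `control-plane.minikube.internal` entry: any existing line for that host is dropped and the fresh mapping is appended. The same idempotent rewrite expressed in Go as a sketch; minikube runs the shell version over SSH.)

	package main

	import (
		"os"
		"strings"
	)

	// ensureHostsEntry removes any existing line for host and appends "ip\thost".
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // drop stale mapping
			}
			if line != "" {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.39.209", "control-plane.minikube.internal"); err != nil {
			panic(err)
		}
	}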
	I0719 03:38:43.151480  131185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 03:38:43.252499  131185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 03:38:43.268025  131185 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/addons-513705 for IP: 192.168.39.209
	I0719 03:38:43.268061  131185 certs.go:194] generating shared ca certs ...
	I0719 03:38:43.268078  131185 certs.go:226] acquiring lock for ca certs: {Name:mk4073377b5f511f5cfaf63e5b0f12377e731a57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:38:43.268212  131185 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key
	I0719 03:38:43.509876  131185 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt ...
	I0719 03:38:43.509906  131185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt: {Name:mkae0969944f0f8e857b31b1ba8dc99e17616a09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:38:43.510104  131185 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key ...
	I0719 03:38:43.510123  131185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key: {Name:mkafaa13726dbf04e101fd71950915865bf71634 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:38:43.510224  131185 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key
	I0719 03:38:43.599006  131185 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.crt ...
	I0719 03:38:43.599036  131185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.crt: {Name:mk24d16287c2aa3a0970cd917b42aae64b5f2312 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:38:43.599224  131185 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key ...
	I0719 03:38:43.599241  131185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key: {Name:mk3cea776bb94a1aae88e3aac6fdab13d8a2c5cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:38:43.599351  131185 certs.go:256] generating profile certs ...
	I0719 03:38:43.599429  131185 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/addons-513705/client.key
	I0719 03:38:43.599445  131185 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/addons-513705/client.crt with IP's: []
	I0719 03:38:43.816505  131185 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/addons-513705/client.crt ...
	I0719 03:38:43.816534  131185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/addons-513705/client.crt: {Name:mk9864508cea5ea698de431a72b9fddee2e0e074 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:38:43.816715  131185 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/addons-513705/client.key ...
	I0719 03:38:43.816728  131185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/addons-513705/client.key: {Name:mk11aada3689912ab9594f3c5c293da7c5c49c10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:38:43.816820  131185 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/addons-513705/apiserver.key.b4b44ad1
	I0719 03:38:43.816846  131185 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/addons-513705/apiserver.crt.b4b44ad1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.209]
	I0719 03:38:43.911625  131185 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/addons-513705/apiserver.crt.b4b44ad1 ...
	I0719 03:38:43.911655  131185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/addons-513705/apiserver.crt.b4b44ad1: {Name:mkda295f610a4793a93f36cd4a86b9acb57330e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:38:43.911832  131185 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/addons-513705/apiserver.key.b4b44ad1 ...
	I0719 03:38:43.911852  131185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/addons-513705/apiserver.key.b4b44ad1: {Name:mk6a87533632dd9f9738f16d8f9dc1fd4f6b0f93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:38:43.911955  131185 certs.go:381] copying /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/addons-513705/apiserver.crt.b4b44ad1 -> /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/addons-513705/apiserver.crt
	I0719 03:38:43.912065  131185 certs.go:385] copying /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/addons-513705/apiserver.key.b4b44ad1 -> /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/addons-513705/apiserver.key
	I0719 03:38:43.912134  131185 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/addons-513705/proxy-client.key
	I0719 03:38:43.912159  131185 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/addons-513705/proxy-client.crt with IP's: []
	I0719 03:38:44.019597  131185 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/addons-513705/proxy-client.crt ...
	I0719 03:38:44.019624  131185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/addons-513705/proxy-client.crt: {Name:mk57ef95536bc8a40137668b1fb28c73cdd905d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:38:44.019807  131185 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/addons-513705/proxy-client.key ...
	I0719 03:38:44.019827  131185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/addons-513705/proxy-client.key: {Name:mk5bbc2ba5514e420654f93a9ebb0f7b90355b13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
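	(For reference: certs.go first mints the shared minikubeCA / proxyClientCA pairs and then profile certs signed by them, with the apiserver cert carrying the service IP, localhost and the node IP as SANs. A compressed sketch of that CA-plus-leaf flow using crypto/x509; the names, key sizes and lifetimes here are illustrative, not minikube's exact parameters.)

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// 1. Self-signed CA ("minikubeCA" in the log).
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		ca := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			BasicConstraintsValid: true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}

		// 2. Leaf cert for the apiserver, signed by the CA, with the SANs shown in the log.
		leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		leaf := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.209"),
			},
		}
		caCert, err := x509.ParseCertificate(caDER)
		if err != nil {
			panic(err)
		}
		leafDER, err := x509.CreateCertificate(rand.Reader, leaf, caCert, &leafKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		// PEM-encoding and writing into the profile directory omitted for brevity.
		fmt.Printf("ca: %d bytes DER, apiserver leaf: %d bytes DER\n", len(caDER), len(leafDER))
	}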
	I0719 03:38:44.020037  131185 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 03:38:44.020084  131185 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem (1082 bytes)
	I0719 03:38:44.020113  131185 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem (1123 bytes)
	I0719 03:38:44.020145  131185 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem (1679 bytes)
	I0719 03:38:44.020805  131185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 03:38:44.049823  131185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 03:38:44.076777  131185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 03:38:44.098135  131185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 03:38:44.119212  131185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/addons-513705/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0719 03:38:44.139924  131185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/addons-513705/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 03:38:44.160965  131185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/addons-513705/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 03:38:44.181702  131185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/addons-513705/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 03:38:44.202304  131185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 03:38:44.223281  131185 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 03:38:44.237907  131185 ssh_runner.go:195] Run: openssl version
	I0719 03:38:44.243493  131185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 03:38:44.253308  131185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 03:38:44.257515  131185 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:38 /usr/share/ca-certificates/minikubeCA.pem
	I0719 03:38:44.257582  131185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 03:38:44.262807  131185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
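	(For reference: the two steps above link the CA into /etc/ssl/certs both under its PEM name and under its OpenSSL subject-hash name, b5213941.0, so TLS clients on the guest trust it. A sketch that obtains the hash the same way the log does, by shelling out to `openssl x509 -hash`, and then creates the symlink; a hypothetical helper, not minikube's certs.go.)

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		const pem = "/usr/share/ca-certificates/minikubeCA.pem"

		// Same command as the log: print the subject hash of the cert.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

		link := "/etc/ssl/certs/" + hash + ".0"
		_ = os.Remove(link) // ignore "does not exist"; mirrors `ln -fs`
		if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
			panic(err)
		}
		fmt.Println("linked", link)
	}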
	I0719 03:38:44.272513  131185 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 03:38:44.276149  131185 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 03:38:44.276212  131185 kubeadm.go:392] StartCluster: {Name:addons-513705 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-513705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 03:38:44.276351  131185 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 03:38:44.276393  131185 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 03:38:44.309696  131185 cri.go:89] found id: ""
	I0719 03:38:44.309795  131185 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 03:38:44.319272  131185 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 03:38:44.327797  131185 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 03:38:44.336474  131185 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 03:38:44.336496  131185 kubeadm.go:157] found existing configuration files:
	
	I0719 03:38:44.336550  131185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 03:38:44.344697  131185 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 03:38:44.344760  131185 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 03:38:44.352889  131185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 03:38:44.360680  131185 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 03:38:44.360738  131185 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 03:38:44.368972  131185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 03:38:44.376843  131185 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 03:38:44.376897  131185 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 03:38:44.385082  131185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 03:38:44.393172  131185 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 03:38:44.393229  131185 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 03:38:44.401442  131185 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 03:38:44.460404  131185 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0719 03:38:44.460481  131185 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 03:38:44.579244  131185 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 03:38:44.579362  131185 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 03:38:44.579513  131185 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 03:38:44.800917  131185 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 03:38:44.850356  131185 out.go:204]   - Generating certificates and keys ...
	I0719 03:38:44.850511  131185 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 03:38:44.850610  131185 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 03:38:44.960835  131185 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 03:38:45.090785  131185 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0719 03:38:45.176048  131185 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0719 03:38:45.416385  131185 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0719 03:38:45.545673  131185 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0719 03:38:45.545858  131185 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-513705 localhost] and IPs [192.168.39.209 127.0.0.1 ::1]
	I0719 03:38:45.676381  131185 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0719 03:38:45.676614  131185 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-513705 localhost] and IPs [192.168.39.209 127.0.0.1 ::1]
	I0719 03:38:45.836404  131185 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 03:38:45.939609  131185 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 03:38:46.006666  131185 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0719 03:38:46.006935  131185 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 03:38:46.164233  131185 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 03:38:46.379311  131185 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 03:38:46.520188  131185 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 03:38:46.677618  131185 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 03:38:46.924182  131185 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 03:38:46.924890  131185 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 03:38:46.929251  131185 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 03:38:46.931074  131185 out.go:204]   - Booting up control plane ...
	I0719 03:38:46.931180  131185 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 03:38:46.931268  131185 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 03:38:46.931366  131185 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 03:38:46.948779  131185 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 03:38:46.949595  131185 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 03:38:46.949641  131185 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 03:38:47.073577  131185 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 03:38:47.073708  131185 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 03:38:48.074443  131185 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001763761s
	I0719 03:38:48.074552  131185 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 03:38:52.573781  131185 kubeadm.go:310] [api-check] The API server is healthy after 4.502166439s
	I0719 03:38:52.590117  131185 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 03:38:52.603224  131185 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 03:38:52.625933  131185 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 03:38:52.626125  131185 kubeadm.go:310] [mark-control-plane] Marking the node addons-513705 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 03:38:52.638672  131185 kubeadm.go:310] [bootstrap-token] Using token: bnmmv4.impoeox0ib2on8kh
	I0719 03:38:52.640168  131185 out.go:204]   - Configuring RBAC rules ...
	I0719 03:38:52.640315  131185 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 03:38:52.645048  131185 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 03:38:52.658178  131185 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 03:38:52.665110  131185 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 03:38:52.669007  131185 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 03:38:52.676108  131185 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 03:38:52.979888  131185 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 03:38:53.430895  131185 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 03:38:53.979195  131185 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 03:38:53.979959  131185 kubeadm.go:310] 
	I0719 03:38:53.980037  131185 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 03:38:53.980046  131185 kubeadm.go:310] 
	I0719 03:38:53.980179  131185 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 03:38:53.980203  131185 kubeadm.go:310] 
	I0719 03:38:53.980256  131185 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 03:38:53.980360  131185 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 03:38:53.980410  131185 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 03:38:53.980417  131185 kubeadm.go:310] 
	I0719 03:38:53.980496  131185 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 03:38:53.980516  131185 kubeadm.go:310] 
	I0719 03:38:53.980587  131185 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 03:38:53.980596  131185 kubeadm.go:310] 
	I0719 03:38:53.980652  131185 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 03:38:53.980728  131185 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 03:38:53.980799  131185 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 03:38:53.980806  131185 kubeadm.go:310] 
	I0719 03:38:53.980873  131185 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 03:38:53.980952  131185 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 03:38:53.980961  131185 kubeadm.go:310] 
	I0719 03:38:53.981187  131185 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bnmmv4.impoeox0ib2on8kh \
	I0719 03:38:53.981388  131185 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1b8c9b438cd382daae07d0c80077e3e844c6e3a56a419c26c4cfa86e5846b833 \
	I0719 03:38:53.981435  131185 kubeadm.go:310] 	--control-plane 
	I0719 03:38:53.981450  131185 kubeadm.go:310] 
	I0719 03:38:53.981521  131185 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 03:38:53.981529  131185 kubeadm.go:310] 
	I0719 03:38:53.981617  131185 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bnmmv4.impoeox0ib2on8kh \
	I0719 03:38:53.981723  131185 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1b8c9b438cd382daae07d0c80077e3e844c6e3a56a419c26c4cfa86e5846b833 
	I0719 03:38:53.982059  131185 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 03:38:53.982097  131185 cni.go:84] Creating CNI manager for ""
	I0719 03:38:53.982106  131185 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 03:38:53.983759  131185 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 03:38:53.984925  131185 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 03:38:53.998962  131185 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 03:38:54.019769  131185 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 03:38:54.019852  131185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:38:54.019850  131185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-513705 minikube.k8s.io/updated_at=2024_07_19T03_38_54_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db minikube.k8s.io/name=addons-513705 minikube.k8s.io/primary=true
	I0719 03:38:54.125944  131185 ops.go:34] apiserver oom_adj: -16
	I0719 03:38:54.125975  131185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:38:54.626848  131185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:38:55.126175  131185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:38:55.626947  131185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:38:56.127058  131185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:38:56.626857  131185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:38:57.126928  131185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:38:57.626952  131185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:38:58.126746  131185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:38:58.626098  131185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:38:59.126165  131185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:38:59.626827  131185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:39:00.126623  131185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:39:00.626301  131185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:39:01.126384  131185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:39:01.626036  131185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:39:02.126906  131185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:39:02.626930  131185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:39:03.127054  131185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:39:03.626534  131185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:39:04.126108  131185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:39:04.626663  131185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:39:05.126682  131185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:39:05.626981  131185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:39:06.126722  131185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:39:06.626090  131185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:39:06.750693  131185 kubeadm.go:1113] duration metric: took 12.730906251s to wait for elevateKubeSystemPrivileges
	I0719 03:39:06.750742  131185 kubeadm.go:394] duration metric: took 22.474536848s to StartCluster
	I0719 03:39:06.750768  131185 settings.go:142] acquiring lock: {Name:mka29304fbead54bd9b698f9018edea7e59177cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:39:06.750923  131185 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19302-122995/kubeconfig
	I0719 03:39:06.751508  131185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/kubeconfig: {Name:mk6e4a1b81f147a5c312ddde5acb372811581248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:39:06.751783  131185 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0719 03:39:06.751800  131185 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.209 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 03:39:06.751859  131185 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0719 03:39:06.751960  131185 addons.go:69] Setting yakd=true in profile "addons-513705"
	I0719 03:39:06.752011  131185 config.go:182] Loaded profile config "addons-513705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 03:39:06.752020  131185 addons.go:69] Setting registry=true in profile "addons-513705"
	I0719 03:39:06.752006  131185 addons.go:69] Setting inspektor-gadget=true in profile "addons-513705"
	I0719 03:39:06.752051  131185 addons.go:234] Setting addon registry=true in "addons-513705"
	I0719 03:39:06.752026  131185 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-513705"
	I0719 03:39:06.752010  131185 addons.go:69] Setting storage-provisioner=true in profile "addons-513705"
	I0719 03:39:06.752060  131185 addons.go:234] Setting addon inspektor-gadget=true in "addons-513705"
	I0719 03:39:06.752069  131185 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-513705"
	I0719 03:39:06.752079  131185 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-513705"
	I0719 03:39:06.752080  131185 addons.go:234] Setting addon storage-provisioner=true in "addons-513705"
	I0719 03:39:06.752099  131185 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-513705"
	I0719 03:39:06.752104  131185 host.go:66] Checking if "addons-513705" exists ...
	I0719 03:39:06.752105  131185 host.go:66] Checking if "addons-513705" exists ...
	I0719 03:39:06.752105  131185 host.go:66] Checking if "addons-513705" exists ...
	I0719 03:39:06.752047  131185 addons.go:69] Setting volcano=true in profile "addons-513705"
	I0719 03:39:06.752255  131185 addons.go:234] Setting addon volcano=true in "addons-513705"
	I0719 03:39:06.752104  131185 host.go:66] Checking if "addons-513705" exists ...
	I0719 03:39:06.752291  131185 host.go:66] Checking if "addons-513705" exists ...
	I0719 03:39:06.752610  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:06.752612  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:06.752638  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:06.752614  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:06.752645  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:06.752661  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:06.752664  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:06.752678  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:06.752686  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:06.752709  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:06.752113  131185 addons.go:69] Setting gcp-auth=true in profile "addons-513705"
	I0719 03:39:06.752112  131185 addons.go:69] Setting cloud-spanner=true in profile "addons-513705"
	I0719 03:39:06.752822  131185 addons.go:234] Setting addon cloud-spanner=true in "addons-513705"
	I0719 03:39:06.752856  131185 host.go:66] Checking if "addons-513705" exists ...
	I0719 03:39:06.752111  131185 addons.go:69] Setting metrics-server=true in profile "addons-513705"
	I0719 03:39:06.752118  131185 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-513705"
	I0719 03:39:06.752967  131185 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-513705"
	I0719 03:39:06.753017  131185 host.go:66] Checking if "addons-513705" exists ...
	I0719 03:39:06.753146  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:06.753171  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:06.753215  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:06.753248  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:06.753506  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:06.752119  131185 addons.go:69] Setting volumesnapshots=true in profile "addons-513705"
	I0719 03:39:06.752124  131185 addons.go:69] Setting helm-tiller=true in profile "addons-513705"
	I0719 03:39:06.752124  131185 addons.go:69] Setting default-storageclass=true in profile "addons-513705"
	I0719 03:39:06.752151  131185 addons.go:234] Setting addon yakd=true in "addons-513705"
	I0719 03:39:06.752786  131185 mustload.go:65] Loading cluster: addons-513705
	I0719 03:39:06.753638  131185 addons.go:234] Setting addon helm-tiller=true in "addons-513705"
	I0719 03:39:06.753678  131185 host.go:66] Checking if "addons-513705" exists ...
	I0719 03:39:06.753846  131185 config.go:182] Loaded profile config "addons-513705": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 03:39:06.754004  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:06.754045  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:06.754102  131185 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-513705"
	I0719 03:39:06.754309  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:06.754358  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:06.754423  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:06.754464  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:06.754784  131185 host.go:66] Checking if "addons-513705" exists ...
	I0719 03:39:06.755197  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:06.755219  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:06.756502  131185 out.go:177] * Verifying Kubernetes components...
	I0719 03:39:06.758077  131185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 03:39:06.752120  131185 addons.go:69] Setting ingress=true in profile "addons-513705"
	I0719 03:39:06.758370  131185 addons.go:234] Setting addon ingress=true in "addons-513705"
	I0719 03:39:06.758420  131185 host.go:66] Checking if "addons-513705" exists ...
	I0719 03:39:06.758805  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:06.758840  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:06.752885  131185 addons.go:234] Setting addon metrics-server=true in "addons-513705"
	I0719 03:39:06.763238  131185 host.go:66] Checking if "addons-513705" exists ...
	I0719 03:39:06.768577  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:06.768648  131185 addons.go:234] Setting addon volumesnapshots=true in "addons-513705"
	I0719 03:39:06.768692  131185 host.go:66] Checking if "addons-513705" exists ...
	I0719 03:39:06.752119  131185 addons.go:69] Setting ingress-dns=true in profile "addons-513705"
	I0719 03:39:06.768773  131185 addons.go:234] Setting addon ingress-dns=true in "addons-513705"
	I0719 03:39:06.768822  131185 host.go:66] Checking if "addons-513705" exists ...
	I0719 03:39:06.774866  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44037
	I0719 03:39:06.774928  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35337
	I0719 03:39:06.775149  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45215
	I0719 03:39:06.775589  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:06.775630  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:06.775594  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:06.776156  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:06.776180  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:06.776232  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:06.776266  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:06.776271  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:06.776284  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:06.776621  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:06.776697  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:06.776815  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32905
	I0719 03:39:06.776980  131185 main.go:141] libmachine: (addons-513705) Calling .GetState
	I0719 03:39:06.777233  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:06.777266  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:06.777425  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:06.777977  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:06.778123  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:06.778139  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:06.778666  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:06.778704  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:06.779013  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:06.779281  131185 main.go:141] libmachine: (addons-513705) Calling .GetState
	I0719 03:39:06.780912  131185 host.go:66] Checking if "addons-513705" exists ...
	I0719 03:39:06.789549  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:06.789590  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:06.789608  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:06.789628  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:06.790097  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:06.790128  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:06.790947  131185 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-513705"
	I0719 03:39:06.791001  131185 host.go:66] Checking if "addons-513705" exists ...
	I0719 03:39:06.791474  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:06.791512  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:06.791570  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35779
	I0719 03:39:06.792061  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:06.792101  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:06.792501  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38093
	I0719 03:39:06.792614  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34993
	I0719 03:39:06.792769  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:06.793185  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:06.793585  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:06.793605  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:06.793750  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:06.793772  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:06.794047  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:06.794077  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:06.794605  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:06.794645  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:06.794658  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:06.794679  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:06.795032  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:06.795692  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:06.795710  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:06.801190  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:06.801772  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:06.801794  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:06.803398  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39619
	I0719 03:39:06.803861  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:06.804390  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:06.804408  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:06.804748  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:06.805326  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:06.805362  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:06.807362  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46003
	I0719 03:39:06.807736  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:06.808295  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:06.808328  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:06.808756  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:06.808961  131185 main.go:141] libmachine: (addons-513705) Calling .GetState
	I0719 03:39:06.811058  131185 main.go:141] libmachine: (addons-513705) Calling .DriverName
	I0719 03:39:06.813242  131185 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0719 03:39:06.813683  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37621
	I0719 03:39:06.814170  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:06.814672  131185 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0719 03:39:06.814697  131185 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0719 03:39:06.814720  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHHostname
	I0719 03:39:06.814676  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:06.814787  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:06.815173  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:06.815675  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:06.815711  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:06.818832  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:39:06.819302  131185 main.go:141] libmachine: (addons-513705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ce:f2", ip: ""} in network mk-addons-513705: {Iface:virbr1 ExpiryTime:2024-07-19 04:38:31 +0000 UTC Type:0 Mac:52:54:00:d1:ce:f2 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-513705 Clientid:01:52:54:00:d1:ce:f2}
	I0719 03:39:06.819324  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined IP address 192.168.39.209 and MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:39:06.819612  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHPort
	I0719 03:39:06.819773  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHKeyPath
	I0719 03:39:06.819907  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHUsername
	I0719 03:39:06.820050  131185 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/addons-513705/id_rsa Username:docker}
	I0719 03:39:06.822974  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40029
	I0719 03:39:06.823576  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:06.824148  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:06.824168  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:06.824540  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:06.824728  131185 main.go:141] libmachine: (addons-513705) Calling .GetState
	I0719 03:39:06.826752  131185 main.go:141] libmachine: (addons-513705) Calling .DriverName
	I0719 03:39:06.829258  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44671
	I0719 03:39:06.829714  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:06.830220  131185 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0719 03:39:06.830288  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:06.830306  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:06.830725  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:06.831560  131185 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0719 03:39:06.831581  131185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0719 03:39:06.831599  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHHostname
	I0719 03:39:06.831830  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:06.831872  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:06.834792  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:39:06.835136  131185 main.go:141] libmachine: (addons-513705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ce:f2", ip: ""} in network mk-addons-513705: {Iface:virbr1 ExpiryTime:2024-07-19 04:38:31 +0000 UTC Type:0 Mac:52:54:00:d1:ce:f2 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-513705 Clientid:01:52:54:00:d1:ce:f2}
	I0719 03:39:06.835152  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined IP address 192.168.39.209 and MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:39:06.835179  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38361
	I0719 03:39:06.835543  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHPort
	I0719 03:39:06.835604  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35371
	I0719 03:39:06.835686  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHKeyPath
	I0719 03:39:06.835924  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHUsername
	I0719 03:39:06.836070  131185 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/addons-513705/id_rsa Username:docker}
	I0719 03:39:06.836309  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:06.836820  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:06.836834  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:06.837191  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:06.837451  131185 main.go:141] libmachine: (addons-513705) Calling .GetState
	I0719 03:39:06.837702  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43267
	I0719 03:39:06.838009  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:06.838281  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:06.838633  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:06.838659  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:06.839183  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:06.839354  131185 main.go:141] libmachine: (addons-513705) Calling .DriverName
	I0719 03:39:06.839711  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:06.839736  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:06.840358  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:06.840377  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:06.840675  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:06.841288  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:06.841339  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:06.841694  131185 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0719 03:39:06.842948  131185 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0719 03:39:06.842969  131185 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0719 03:39:06.842992  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHHostname
	I0719 03:39:06.843447  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35473
	I0719 03:39:06.843930  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:06.844496  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:06.844513  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:06.844913  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:06.845552  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:06.845591  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:06.846544  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:39:06.847109  131185 main.go:141] libmachine: (addons-513705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ce:f2", ip: ""} in network mk-addons-513705: {Iface:virbr1 ExpiryTime:2024-07-19 04:38:31 +0000 UTC Type:0 Mac:52:54:00:d1:ce:f2 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-513705 Clientid:01:52:54:00:d1:ce:f2}
	I0719 03:39:06.847131  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined IP address 192.168.39.209 and MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:39:06.847335  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHPort
	I0719 03:39:06.847538  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHKeyPath
	I0719 03:39:06.847732  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHUsername
	I0719 03:39:06.847877  131185 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/addons-513705/id_rsa Username:docker}
	I0719 03:39:06.863412  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41845
	I0719 03:39:06.863412  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46763
	I0719 03:39:06.864083  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:06.864286  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:06.864845  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:06.864867  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:06.865002  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:06.865032  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:06.865436  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:06.865437  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:06.865676  131185 main.go:141] libmachine: (addons-513705) Calling .GetState
	I0719 03:39:06.865680  131185 main.go:141] libmachine: (addons-513705) Calling .GetState
	I0719 03:39:06.866597  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34825
	I0719 03:39:06.866980  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:06.867491  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:06.867513  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:06.867858  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:06.867918  131185 main.go:141] libmachine: (addons-513705) Calling .DriverName
	I0719 03:39:06.867954  131185 main.go:141] libmachine: (addons-513705) Calling .DriverName
	I0719 03:39:06.868430  131185 main.go:141] libmachine: (addons-513705) Calling .GetState
	I0719 03:39:06.869965  131185 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0719 03:39:06.869965  131185 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0719 03:39:06.870383  131185 main.go:141] libmachine: (addons-513705) Calling .DriverName
	I0719 03:39:06.870452  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38643
	I0719 03:39:06.870894  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:06.871323  131185 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 03:39:06.871346  131185 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 03:39:06.871367  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHHostname
	I0719 03:39:06.871411  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:06.871432  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:06.872075  131185 out.go:177]   - Using image docker.io/registry:2.8.3
	I0719 03:39:06.872133  131185 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0719 03:39:06.872148  131185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0719 03:39:06.872180  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHHostname
	I0719 03:39:06.872185  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:06.872422  131185 main.go:141] libmachine: (addons-513705) Calling .GetState
	I0719 03:39:06.873609  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44315
	I0719 03:39:06.874137  131185 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0719 03:39:06.874454  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45643
	I0719 03:39:06.874975  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:06.875306  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:06.875339  131185 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0719 03:39:06.875350  131185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0719 03:39:06.875365  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHHostname
	I0719 03:39:06.875985  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:06.876002  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:06.876016  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34593
	I0719 03:39:06.876401  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:06.876419  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:06.876467  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:39:06.876626  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:06.876827  131185 main.go:141] libmachine: (addons-513705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ce:f2", ip: ""} in network mk-addons-513705: {Iface:virbr1 ExpiryTime:2024-07-19 04:38:31 +0000 UTC Type:0 Mac:52:54:00:d1:ce:f2 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-513705 Clientid:01:52:54:00:d1:ce:f2}
	I0719 03:39:06.876849  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined IP address 192.168.39.209 and MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:39:06.876950  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:06.877030  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHPort
	I0719 03:39:06.877408  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHKeyPath
	I0719 03:39:06.877484  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43515
	I0719 03:39:06.877576  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHUsername
	I0719 03:39:06.877729  131185 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/addons-513705/id_rsa Username:docker}
	I0719 03:39:06.877923  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:06.878063  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:06.878434  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:39:06.878568  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:06.878644  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:06.878838  131185 addons.go:234] Setting addon default-storageclass=true in "addons-513705"
	I0719 03:39:06.878883  131185 host.go:66] Checking if "addons-513705" exists ...
	I0719 03:39:06.879115  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:06.879230  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:06.879263  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:06.879621  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:06.879649  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:06.879655  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:06.879916  131185 main.go:141] libmachine: (addons-513705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ce:f2", ip: ""} in network mk-addons-513705: {Iface:virbr1 ExpiryTime:2024-07-19 04:38:31 +0000 UTC Type:0 Mac:52:54:00:d1:ce:f2 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-513705 Clientid:01:52:54:00:d1:ce:f2}
	I0719 03:39:06.879939  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined IP address 192.168.39.209 and MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:39:06.879980  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:06.880177  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:06.880192  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:06.880225  131185 main.go:141] libmachine: (addons-513705) Calling .DriverName
	I0719 03:39:06.880909  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHPort
	I0719 03:39:06.880929  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:06.880910  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:39:06.881112  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHKeyPath
	I0719 03:39:06.881247  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHUsername
	I0719 03:39:06.881295  131185 main.go:141] libmachine: (addons-513705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ce:f2", ip: ""} in network mk-addons-513705: {Iface:virbr1 ExpiryTime:2024-07-19 04:38:31 +0000 UTC Type:0 Mac:52:54:00:d1:ce:f2 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-513705 Clientid:01:52:54:00:d1:ce:f2}
	I0719 03:39:06.881312  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined IP address 192.168.39.209 and MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:39:06.881421  131185 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/addons-513705/id_rsa Username:docker}
	I0719 03:39:06.881686  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHPort
	I0719 03:39:06.881887  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHKeyPath
	I0719 03:39:06.882314  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:06.882361  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:06.882557  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHUsername
	I0719 03:39:06.882662  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40327
	I0719 03:39:06.882743  131185 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/addons-513705/id_rsa Username:docker}
	I0719 03:39:06.882964  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:06.883420  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:06.883437  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:06.883973  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:06.885875  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35311
	I0719 03:39:06.886192  131185 main.go:141] libmachine: (addons-513705) Calling .GetState
	I0719 03:39:06.886547  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:06.887001  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:06.887026  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:06.887392  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:06.887568  131185 main.go:141] libmachine: (addons-513705) Calling .GetState
	I0719 03:39:06.888420  131185 main.go:141] libmachine: (addons-513705) Calling .DriverName
	I0719 03:39:06.891192  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41407
	I0719 03:39:06.891682  131185 main.go:141] libmachine: (addons-513705) Calling .DriverName
	I0719 03:39:06.891753  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:06.892275  131185 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0719 03:39:06.892741  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:06.892760  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:06.893158  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:06.893323  131185 main.go:141] libmachine: (addons-513705) Calling .GetState
	I0719 03:39:06.893812  131185 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0719 03:39:06.894118  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42509
	I0719 03:39:06.894150  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37751
	I0719 03:39:06.894928  131185 out.go:177]   - Using image docker.io/busybox:stable
	I0719 03:39:06.895110  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:06.895701  131185 main.go:141] libmachine: (addons-513705) Calling .DriverName
	I0719 03:39:06.895782  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:06.895942  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:06.895955  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:06.896103  131185 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0719 03:39:06.896119  131185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0719 03:39:06.896139  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHHostname
	I0719 03:39:06.896151  131185 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0719 03:39:06.896321  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:06.896514  131185 main.go:141] libmachine: (addons-513705) Calling .GetState
	I0719 03:39:06.897162  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:06.897196  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:06.897575  131185 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0719 03:39:06.898235  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:06.898567  131185 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0719 03:39:06.898570  131185 main.go:141] libmachine: (addons-513705) Calling .GetState
	I0719 03:39:06.898855  131185 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0719 03:39:06.898871  131185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0719 03:39:06.898940  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHHostname
	I0719 03:39:06.899326  131185 main.go:141] libmachine: (addons-513705) Calling .DriverName
	I0719 03:39:06.900014  131185 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0719 03:39:06.900031  131185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0719 03:39:06.900048  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHHostname
	I0719 03:39:06.900559  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:39:06.901491  131185 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 03:39:06.902793  131185 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 03:39:06.902810  131185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 03:39:06.902830  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHHostname
	I0719 03:39:06.902910  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:39:06.902938  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33037
	I0719 03:39:06.903149  131185 main.go:141] libmachine: (addons-513705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ce:f2", ip: ""} in network mk-addons-513705: {Iface:virbr1 ExpiryTime:2024-07-19 04:38:31 +0000 UTC Type:0 Mac:52:54:00:d1:ce:f2 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-513705 Clientid:01:52:54:00:d1:ce:f2}
	I0719 03:39:06.903167  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined IP address 192.168.39.209 and MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:39:06.903200  131185 main.go:141] libmachine: (addons-513705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ce:f2", ip: ""} in network mk-addons-513705: {Iface:virbr1 ExpiryTime:2024-07-19 04:38:31 +0000 UTC Type:0 Mac:52:54:00:d1:ce:f2 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-513705 Clientid:01:52:54:00:d1:ce:f2}
	I0719 03:39:06.903212  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined IP address 192.168.39.209 and MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:39:06.903396  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHPort
	I0719 03:39:06.903452  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHPort
	I0719 03:39:06.903587  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHKeyPath
	I0719 03:39:06.903805  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38551
	I0719 03:39:06.903927  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:39:06.904211  131185 main.go:141] libmachine: (addons-513705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ce:f2", ip: ""} in network mk-addons-513705: {Iface:virbr1 ExpiryTime:2024-07-19 04:38:31 +0000 UTC Type:0 Mac:52:54:00:d1:ce:f2 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-513705 Clientid:01:52:54:00:d1:ce:f2}
	I0719 03:39:06.904233  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined IP address 192.168.39.209 and MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:39:06.904094  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHKeyPath
	I0719 03:39:06.904102  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHUsername
	I0719 03:39:06.904154  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:06.904587  131185 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/addons-513705/id_rsa Username:docker}
	I0719 03:39:06.904728  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHUsername
	I0719 03:39:06.904791  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHPort
	I0719 03:39:06.904874  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:06.904889  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:06.904958  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHKeyPath
	I0719 03:39:06.905137  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHUsername
	I0719 03:39:06.905130  131185 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/addons-513705/id_rsa Username:docker}
	I0719 03:39:06.905501  131185 main.go:141] libmachine: (addons-513705) Calling .DriverName
	I0719 03:39:06.905555  131185 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/addons-513705/id_rsa Username:docker}
	I0719 03:39:06.905798  131185 main.go:141] libmachine: Making call to close driver server
	I0719 03:39:06.905806  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:06.905812  131185 main.go:141] libmachine: (addons-513705) Calling .Close
	I0719 03:39:06.905823  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:06.906033  131185 main.go:141] libmachine: (addons-513705) DBG | Closing plugin on server side
	I0719 03:39:06.906056  131185 main.go:141] libmachine: Successfully made call to close driver server
	I0719 03:39:06.906063  131185 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 03:39:06.906071  131185 main.go:141] libmachine: Making call to close driver server
	I0719 03:39:06.906077  131185 main.go:141] libmachine: (addons-513705) Calling .Close
	I0719 03:39:06.906361  131185 main.go:141] libmachine: (addons-513705) DBG | Closing plugin on server side
	I0719 03:39:06.906396  131185 main.go:141] libmachine: Successfully made call to close driver server
	I0719 03:39:06.906405  131185 main.go:141] libmachine: Making call to close connection to plugin binary
	W0719 03:39:06.906474  131185 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0719 03:39:06.906687  131185 main.go:141] libmachine: (addons-513705) Calling .GetState
	I0719 03:39:06.907223  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:39:06.907340  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:06.907361  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:06.907795  131185 main.go:141] libmachine: (addons-513705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ce:f2", ip: ""} in network mk-addons-513705: {Iface:virbr1 ExpiryTime:2024-07-19 04:38:31 +0000 UTC Type:0 Mac:52:54:00:d1:ce:f2 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-513705 Clientid:01:52:54:00:d1:ce:f2}
	I0719 03:39:06.907817  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined IP address 192.168.39.209 and MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:39:06.907978  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHPort
	I0719 03:39:06.908132  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHKeyPath
	I0719 03:39:06.908244  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHUsername
	I0719 03:39:06.908516  131185 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/addons-513705/id_rsa Username:docker}
	I0719 03:39:06.908633  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:06.908688  131185 main.go:141] libmachine: (addons-513705) Calling .DriverName
	I0719 03:39:06.908883  131185 main.go:141] libmachine: (addons-513705) Calling .GetState
	I0719 03:39:06.910883  131185 main.go:141] libmachine: (addons-513705) Calling .DriverName
	I0719 03:39:06.911013  131185 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0719 03:39:06.912869  131185 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0719 03:39:06.912958  131185 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0719 03:39:06.912984  131185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0719 03:39:06.913000  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHHostname
	I0719 03:39:06.914464  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38469
	I0719 03:39:06.915064  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:06.915479  131185 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0719 03:39:06.915754  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:06.915777  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:06.916120  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:06.916251  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44031
	I0719 03:39:06.916301  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:39:06.916438  131185 main.go:141] libmachine: (addons-513705) Calling .GetState
	I0719 03:39:06.916670  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:06.916744  131185 main.go:141] libmachine: (addons-513705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ce:f2", ip: ""} in network mk-addons-513705: {Iface:virbr1 ExpiryTime:2024-07-19 04:38:31 +0000 UTC Type:0 Mac:52:54:00:d1:ce:f2 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-513705 Clientid:01:52:54:00:d1:ce:f2}
	I0719 03:39:06.916773  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined IP address 192.168.39.209 and MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:39:06.917034  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHPort
	I0719 03:39:06.917296  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHKeyPath
	I0719 03:39:06.917552  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:06.917571  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHUsername
	I0719 03:39:06.917612  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:06.917729  131185 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/addons-513705/id_rsa Username:docker}
	I0719 03:39:06.918074  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:06.918082  131185 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0719 03:39:06.918264  131185 main.go:141] libmachine: (addons-513705) Calling .DriverName
	I0719 03:39:06.918644  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:06.918896  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:06.920083  131185 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0719 03:39:06.920879  131185 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0719 03:39:06.921593  131185 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0719 03:39:06.921608  131185 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0719 03:39:06.921623  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHHostname
	I0719 03:39:06.923495  131185 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0719 03:39:06.924771  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:39:06.924808  131185 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0719 03:39:06.925357  131185 main.go:141] libmachine: (addons-513705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ce:f2", ip: ""} in network mk-addons-513705: {Iface:virbr1 ExpiryTime:2024-07-19 04:38:31 +0000 UTC Type:0 Mac:52:54:00:d1:ce:f2 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-513705 Clientid:01:52:54:00:d1:ce:f2}
	I0719 03:39:06.925382  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined IP address 192.168.39.209 and MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:39:06.925539  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHPort
	I0719 03:39:06.925741  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHKeyPath
	I0719 03:39:06.925935  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHUsername
	I0719 03:39:06.926088  131185 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/addons-513705/id_rsa Username:docker}
	I0719 03:39:06.927562  131185 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0719 03:39:06.929444  131185 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0719 03:39:06.930628  131185 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0719 03:39:06.930651  131185 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0719 03:39:06.930679  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHHostname
	I0719 03:39:06.934464  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:39:06.935030  131185 main.go:141] libmachine: (addons-513705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ce:f2", ip: ""} in network mk-addons-513705: {Iface:virbr1 ExpiryTime:2024-07-19 04:38:31 +0000 UTC Type:0 Mac:52:54:00:d1:ce:f2 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-513705 Clientid:01:52:54:00:d1:ce:f2}
	I0719 03:39:06.935069  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined IP address 192.168.39.209 and MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:39:06.935428  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHPort
	I0719 03:39:06.935668  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHKeyPath
	I0719 03:39:06.935865  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHUsername
	I0719 03:39:06.936036  131185 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/addons-513705/id_rsa Username:docker}
	W0719 03:39:06.950097  131185 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:52802->192.168.39.209:22: read: connection reset by peer
	I0719 03:39:06.950129  131185 retry.go:31] will retry after 301.821335ms: ssh: handshake failed: read tcp 192.168.39.1:52802->192.168.39.209:22: read: connection reset by peer
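The handshake failure above is transient (sshd in the freshly booted guest resets the first connection), so minikube's retry helper simply waits and dials again, as the "will retry after 301.821335ms" line shows. A minimal sketch of that retry-with-delay pattern, assuming a hypothetical dialSSH helper; the real logic lives in minikube's retry and sshutil packages:

	package main
	
	import (
		"fmt"
		"time"
	)
	
	// dialSSH stands in for the real SSH dial; it is assumed here only to
	// illustrate the retry pattern shown in the log.
	func dialSSH(addr string) error {
		return fmt.Errorf("ssh: handshake failed: connection reset by peer")
	}
	
	func main() {
		addr := "192.168.39.209:22"
		delay := 300 * time.Millisecond
		for attempt := 1; attempt <= 5; attempt++ {
			err := dialSSH(addr)
			if err == nil {
				fmt.Println("ssh connection established")
				return
			}
			fmt.Printf("attempt %d failed, will retry after %v: %v\n", attempt, delay, err)
			time.Sleep(delay)
			delay *= 2 // back off a little between attempts
		}
	}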
	I0719 03:39:06.964075  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37997
	I0719 03:39:06.964481  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:06.964923  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:06.964949  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:06.965306  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:06.965513  131185 main.go:141] libmachine: (addons-513705) Calling .GetState
	I0719 03:39:06.967137  131185 main.go:141] libmachine: (addons-513705) Calling .DriverName
	I0719 03:39:06.967350  131185 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 03:39:06.967365  131185 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 03:39:06.967381  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHHostname
	I0719 03:39:06.970068  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:39:06.970472  131185 main.go:141] libmachine: (addons-513705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ce:f2", ip: ""} in network mk-addons-513705: {Iface:virbr1 ExpiryTime:2024-07-19 04:38:31 +0000 UTC Type:0 Mac:52:54:00:d1:ce:f2 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-513705 Clientid:01:52:54:00:d1:ce:f2}
	I0719 03:39:06.970503  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined IP address 192.168.39.209 and MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:39:06.970637  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHPort
	I0719 03:39:06.970787  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHKeyPath
	I0719 03:39:06.970941  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHUsername
	I0719 03:39:06.971076  131185 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/addons-513705/id_rsa Username:docker}
	I0719 03:39:07.220285  131185 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0719 03:39:07.220323  131185 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0719 03:39:07.231633  131185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0719 03:39:07.241434  131185 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0719 03:39:07.241465  131185 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0719 03:39:07.253409  131185 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 03:39:07.253430  131185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0719 03:39:07.276927  131185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0719 03:39:07.302765  131185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 03:39:07.315602  131185 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0719 03:39:07.315631  131185 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0719 03:39:07.329287  131185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0719 03:39:07.332457  131185 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0719 03:39:07.332475  131185 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0719 03:39:07.340257  131185 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0719 03:39:07.340278  131185 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0719 03:39:07.350798  131185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0719 03:39:07.353090  131185 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0719 03:39:07.353114  131185 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0719 03:39:07.380025  131185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0719 03:39:07.390588  131185 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
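The shell pipeline above rewrites the coredns ConfigMap in place: it fetches the Corefile, uses sed to insert a hosts block (and a log directive) ahead of the forward plugin, and pipes the result back through kubectl replace. After the rewrite the Corefile carries a block like the following, which resolves host.minikube.internal to the host-side bridge IP (values taken directly from the command above):

	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }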
	I0719 03:39:07.390607  131185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 03:39:07.397953  131185 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0719 03:39:07.397973  131185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0719 03:39:07.432885  131185 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 03:39:07.432920  131185 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 03:39:07.435187  131185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 03:39:07.460492  131185 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0719 03:39:07.460523  131185 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0719 03:39:07.510911  131185 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0719 03:39:07.510941  131185 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0719 03:39:07.538971  131185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0719 03:39:07.539929  131185 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0719 03:39:07.539953  131185 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0719 03:39:07.552570  131185 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 03:39:07.552603  131185 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 03:39:07.574696  131185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0719 03:39:07.604217  131185 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0719 03:39:07.604248  131185 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0719 03:39:07.676790  131185 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0719 03:39:07.676818  131185 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0719 03:39:07.692213  131185 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0719 03:39:07.692235  131185 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0719 03:39:07.707201  131185 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0719 03:39:07.707231  131185 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0719 03:39:07.741909  131185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 03:39:07.783304  131185 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0719 03:39:07.783338  131185 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0719 03:39:07.817867  131185 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0719 03:39:07.817898  131185 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0719 03:39:07.833290  131185 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0719 03:39:07.833322  131185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0719 03:39:07.860071  131185 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0719 03:39:07.860095  131185 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0719 03:39:07.973221  131185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0719 03:39:07.974948  131185 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0719 03:39:07.974979  131185 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0719 03:39:08.057798  131185 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0719 03:39:08.057831  131185 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0719 03:39:08.074120  131185 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 03:39:08.074151  131185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0719 03:39:08.230569  131185 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0719 03:39:08.230647  131185 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0719 03:39:08.264519  131185 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0719 03:39:08.264562  131185 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0719 03:39:08.329801  131185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 03:39:08.537955  131185 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0719 03:39:08.537980  131185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0719 03:39:08.632435  131185 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0719 03:39:08.632466  131185 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0719 03:39:08.640134  131185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.408463412s)
	I0719 03:39:08.640192  131185 main.go:141] libmachine: Making call to close driver server
	I0719 03:39:08.640203  131185 main.go:141] libmachine: (addons-513705) Calling .Close
	I0719 03:39:08.640518  131185 main.go:141] libmachine: Successfully made call to close driver server
	I0719 03:39:08.640538  131185 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 03:39:08.640547  131185 main.go:141] libmachine: Making call to close driver server
	I0719 03:39:08.640555  131185 main.go:141] libmachine: (addons-513705) Calling .Close
	I0719 03:39:08.640786  131185 main.go:141] libmachine: Successfully made call to close driver server
	I0719 03:39:08.640803  131185 main.go:141] libmachine: (addons-513705) DBG | Closing plugin on server side
	I0719 03:39:08.640812  131185 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 03:39:08.799743  131185 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0719 03:39:08.799772  131185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0719 03:39:08.801525  131185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0719 03:39:09.057999  131185 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0719 03:39:09.058026  131185 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0719 03:39:09.259570  131185 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0719 03:39:09.259610  131185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0719 03:39:09.457507  131185 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0719 03:39:09.457538  131185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0719 03:39:09.590114  131185 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0719 03:39:09.590142  131185 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0719 03:39:09.700136  131185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0719 03:39:10.102174  131185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.825208692s)
	I0719 03:39:10.102229  131185 main.go:141] libmachine: Making call to close driver server
	I0719 03:39:10.102174  131185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.799370849s)
	I0719 03:39:10.102268  131185 main.go:141] libmachine: Making call to close driver server
	I0719 03:39:10.102281  131185 main.go:141] libmachine: (addons-513705) Calling .Close
	I0719 03:39:10.102243  131185 main.go:141] libmachine: (addons-513705) Calling .Close
	I0719 03:39:10.102656  131185 main.go:141] libmachine: Successfully made call to close driver server
	I0719 03:39:10.102701  131185 main.go:141] libmachine: Successfully made call to close driver server
	I0719 03:39:10.102718  131185 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 03:39:10.102718  131185 main.go:141] libmachine: (addons-513705) DBG | Closing plugin on server side
	I0719 03:39:10.102741  131185 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 03:39:10.102755  131185 main.go:141] libmachine: Making call to close driver server
	I0719 03:39:10.102762  131185 main.go:141] libmachine: (addons-513705) Calling .Close
	I0719 03:39:10.102726  131185 main.go:141] libmachine: Making call to close driver server
	I0719 03:39:10.102915  131185 main.go:141] libmachine: (addons-513705) Calling .Close
	I0719 03:39:10.102932  131185 main.go:141] libmachine: (addons-513705) DBG | Closing plugin on server side
	I0719 03:39:10.103010  131185 main.go:141] libmachine: Successfully made call to close driver server
	I0719 03:39:10.103030  131185 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 03:39:10.103052  131185 main.go:141] libmachine: (addons-513705) DBG | Closing plugin on server side
	I0719 03:39:10.103178  131185 main.go:141] libmachine: (addons-513705) DBG | Closing plugin on server side
	I0719 03:39:10.103220  131185 main.go:141] libmachine: Successfully made call to close driver server
	I0719 03:39:10.103233  131185 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 03:39:10.209745  131185 main.go:141] libmachine: Making call to close driver server
	I0719 03:39:10.209774  131185 main.go:141] libmachine: (addons-513705) Calling .Close
	I0719 03:39:10.210146  131185 main.go:141] libmachine: Successfully made call to close driver server
	I0719 03:39:10.210165  131185 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 03:39:13.910443  131185 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0719 03:39:13.910487  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHHostname
	I0719 03:39:13.914046  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:39:13.914518  131185 main.go:141] libmachine: (addons-513705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ce:f2", ip: ""} in network mk-addons-513705: {Iface:virbr1 ExpiryTime:2024-07-19 04:38:31 +0000 UTC Type:0 Mac:52:54:00:d1:ce:f2 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-513705 Clientid:01:52:54:00:d1:ce:f2}
	I0719 03:39:13.914543  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined IP address 192.168.39.209 and MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:39:13.914740  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHPort
	I0719 03:39:13.914975  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHKeyPath
	I0719 03:39:13.915134  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHUsername
	I0719 03:39:13.915289  131185 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/addons-513705/id_rsa Username:docker}
	I0719 03:39:14.218433  131185 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0719 03:39:14.278475  131185 addons.go:234] Setting addon gcp-auth=true in "addons-513705"
	I0719 03:39:14.278543  131185 host.go:66] Checking if "addons-513705" exists ...
	I0719 03:39:14.279015  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:14.279072  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:14.294431  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37385
	I0719 03:39:14.294925  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:14.295505  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:14.295532  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:14.295889  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:14.296437  131185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 03:39:14.296465  131185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 03:39:14.312550  131185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46619
	I0719 03:39:14.313106  131185 main.go:141] libmachine: () Calling .GetVersion
	I0719 03:39:14.313714  131185 main.go:141] libmachine: Using API Version  1
	I0719 03:39:14.313738  131185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 03:39:14.314100  131185 main.go:141] libmachine: () Calling .GetMachineName
	I0719 03:39:14.314344  131185 main.go:141] libmachine: (addons-513705) Calling .GetState
	I0719 03:39:14.316010  131185 main.go:141] libmachine: (addons-513705) Calling .DriverName
	I0719 03:39:14.316251  131185 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0719 03:39:14.316281  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHHostname
	I0719 03:39:14.319406  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:39:14.319896  131185 main.go:141] libmachine: (addons-513705) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:ce:f2", ip: ""} in network mk-addons-513705: {Iface:virbr1 ExpiryTime:2024-07-19 04:38:31 +0000 UTC Type:0 Mac:52:54:00:d1:ce:f2 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:addons-513705 Clientid:01:52:54:00:d1:ce:f2}
	I0719 03:39:14.319924  131185 main.go:141] libmachine: (addons-513705) DBG | domain addons-513705 has defined IP address 192.168.39.209 and MAC address 52:54:00:d1:ce:f2 in network mk-addons-513705
	I0719 03:39:14.320170  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHPort
	I0719 03:39:14.320364  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHKeyPath
	I0719 03:39:14.320558  131185 main.go:141] libmachine: (addons-513705) Calling .GetSSHUsername
	I0719 03:39:14.320747  131185 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/addons-513705/id_rsa Username:docker}
	I0719 03:39:14.983847  131185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.654521098s)
	I0719 03:39:14.983902  131185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.633070728s)
	I0719 03:39:14.983945  131185 main.go:141] libmachine: Making call to close driver server
	I0719 03:39:14.983967  131185 main.go:141] libmachine: (addons-513705) Calling .Close
	I0719 03:39:14.983985  131185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.603930313s)
	I0719 03:39:14.983906  131185 main.go:141] libmachine: Making call to close driver server
	I0719 03:39:14.984027  131185 main.go:141] libmachine: Making call to close driver server
	I0719 03:39:14.984028  131185 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.593413419s)
	I0719 03:39:14.984036  131185 main.go:141] libmachine: (addons-513705) Calling .Close
	I0719 03:39:14.984044  131185 main.go:141] libmachine: (addons-513705) Calling .Close
	I0719 03:39:14.984047  131185 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0719 03:39:14.984068  131185 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.593434008s)
	I0719 03:39:14.984157  131185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.548949738s)
	I0719 03:39:14.984750  131185 main.go:141] libmachine: Making call to close driver server
	I0719 03:39:14.984763  131185 main.go:141] libmachine: (addons-513705) Calling .Close
	I0719 03:39:14.984219  131185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.445218177s)
	I0719 03:39:14.984817  131185 main.go:141] libmachine: Making call to close driver server
	I0719 03:39:14.984824  131185 main.go:141] libmachine: (addons-513705) Calling .Close
	I0719 03:39:14.984267  131185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.40953387s)
	I0719 03:39:14.984861  131185 main.go:141] libmachine: Making call to close driver server
	I0719 03:39:14.984867  131185 main.go:141] libmachine: (addons-513705) Calling .Close
	I0719 03:39:14.984357  131185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.242419442s)
	I0719 03:39:14.984901  131185 main.go:141] libmachine: Making call to close driver server
	I0719 03:39:14.984908  131185 main.go:141] libmachine: (addons-513705) Calling .Close
	I0719 03:39:14.984391  131185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.01113851s)
	I0719 03:39:14.984939  131185 main.go:141] libmachine: Making call to close driver server
	I0719 03:39:14.984946  131185 main.go:141] libmachine: (addons-513705) Calling .Close
	I0719 03:39:14.984482  131185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.654653416s)
	W0719 03:39:14.985050  131185 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0719 03:39:14.985105  131185 retry.go:31] will retry after 304.707177ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
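This failure is the usual CRD-establishment race: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass, but it is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, so the custom resource has no REST mapping yet and the apply exits with status 1. minikube handles it by retrying (and, further down, re-running the apply with --force). A generic way to sidestep the race is to wait for the CRDs to be established before applying objects that use them; a minimal sketch that shells out to kubectl, with file paths and CRD names copied from the log and everything else assumed:

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// kubectl runs a kubectl command and echoes its combined output.
	func kubectl(args ...string) error {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Print(string(out))
		return err
	}
	
	func main() {
		// 1. Create the snapshot CRDs first.
		if err := kubectl("apply",
			"-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
			"-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml",
			"-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml"); err != nil {
			panic(err)
		}
		// 2. Block until the API server has established them.
		if err := kubectl("wait", "--for=condition=established", "--timeout=60s",
			"crd/volumesnapshotclasses.snapshot.storage.k8s.io",
			"crd/volumesnapshotcontents.snapshot.storage.k8s.io",
			"crd/volumesnapshots.snapshot.storage.k8s.io"); err != nil {
			panic(err)
		}
		// 3. Only then apply the objects that depend on the CRDs.
		if err := kubectl("apply",
			"-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
			"-f", "/etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml",
			"-f", "/etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml"); err != nil {
			panic(err)
		}
	}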
	I0719 03:39:14.984518  131185 main.go:141] libmachine: Successfully made call to close driver server
	I0719 03:39:14.985137  131185 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 03:39:14.985147  131185 main.go:141] libmachine: Making call to close driver server
	I0719 03:39:14.985154  131185 main.go:141] libmachine: (addons-513705) Calling .Close
	I0719 03:39:14.984535  131185 main.go:141] libmachine: (addons-513705) DBG | Closing plugin on server side
	I0719 03:39:14.984543  131185 main.go:141] libmachine: (addons-513705) DBG | Closing plugin on server side
	I0719 03:39:14.984553  131185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.183000925s)
	I0719 03:39:14.985217  131185 main.go:141] libmachine: Making call to close driver server
	I0719 03:39:14.985223  131185 main.go:141] libmachine: (addons-513705) Calling .Close
	I0719 03:39:14.984590  131185 main.go:141] libmachine: (addons-513705) DBG | Closing plugin on server side
	I0719 03:39:14.984611  131185 main.go:141] libmachine: Successfully made call to close driver server
	I0719 03:39:14.985405  131185 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 03:39:14.985416  131185 main.go:141] libmachine: Making call to close driver server
	I0719 03:39:14.985423  131185 main.go:141] libmachine: (addons-513705) Calling .Close
	I0719 03:39:14.984613  131185 main.go:141] libmachine: Successfully made call to close driver server
	I0719 03:39:14.985462  131185 main.go:141] libmachine: Successfully made call to close driver server
	I0719 03:39:14.985467  131185 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 03:39:14.985472  131185 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 03:39:14.985476  131185 main.go:141] libmachine: Making call to close driver server
	I0719 03:39:14.985480  131185 main.go:141] libmachine: Making call to close driver server
	I0719 03:39:14.985485  131185 main.go:141] libmachine: (addons-513705) Calling .Close
	I0719 03:39:14.985490  131185 main.go:141] libmachine: (addons-513705) Calling .Close
	I0719 03:39:14.985536  131185 main.go:141] libmachine: (addons-513705) DBG | Closing plugin on server side
	I0719 03:39:14.985555  131185 main.go:141] libmachine: Successfully made call to close driver server
	I0719 03:39:14.985561  131185 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 03:39:14.985569  131185 main.go:141] libmachine: Making call to close driver server
	I0719 03:39:14.985576  131185 main.go:141] libmachine: (addons-513705) Calling .Close
	I0719 03:39:14.985616  131185 main.go:141] libmachine: (addons-513705) DBG | Closing plugin on server side
	I0719 03:39:14.985635  131185 main.go:141] libmachine: Successfully made call to close driver server
	I0719 03:39:14.985641  131185 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 03:39:14.985647  131185 main.go:141] libmachine: Making call to close driver server
	I0719 03:39:14.985654  131185 main.go:141] libmachine: (addons-513705) Calling .Close
	I0719 03:39:14.985692  131185 main.go:141] libmachine: (addons-513705) DBG | Closing plugin on server side
	I0719 03:39:14.985712  131185 main.go:141] libmachine: Successfully made call to close driver server
	I0719 03:39:14.985718  131185 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 03:39:14.985725  131185 main.go:141] libmachine: Making call to close driver server
	I0719 03:39:14.985732  131185 main.go:141] libmachine: (addons-513705) Calling .Close
	I0719 03:39:14.986486  131185 main.go:141] libmachine: (addons-513705) DBG | Closing plugin on server side
	I0719 03:39:14.986516  131185 main.go:141] libmachine: (addons-513705) DBG | Closing plugin on server side
	I0719 03:39:14.986503  131185 node_ready.go:35] waiting up to 6m0s for node "addons-513705" to be "Ready" ...
	I0719 03:39:14.986547  131185 main.go:141] libmachine: Successfully made call to close driver server
	I0719 03:39:14.986556  131185 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 03:39:14.986557  131185 main.go:141] libmachine: Successfully made call to close driver server
	I0719 03:39:14.986564  131185 main.go:141] libmachine: Making call to close driver server
	I0719 03:39:14.986569  131185 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 03:39:14.986576  131185 main.go:141] libmachine: (addons-513705) Calling .Close
	I0719 03:39:14.986579  131185 main.go:141] libmachine: Making call to close driver server
	I0719 03:39:14.986588  131185 main.go:141] libmachine: (addons-513705) Calling .Close
	I0719 03:39:14.986700  131185 main.go:141] libmachine: (addons-513705) DBG | Closing plugin on server side
	I0719 03:39:14.986721  131185 main.go:141] libmachine: (addons-513705) DBG | Closing plugin on server side
	I0719 03:39:14.986841  131185 main.go:141] libmachine: Successfully made call to close driver server
	I0719 03:39:14.986851  131185 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 03:39:14.986978  131185 main.go:141] libmachine: Successfully made call to close driver server
	I0719 03:39:14.986990  131185 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 03:39:14.988497  131185 main.go:141] libmachine: (addons-513705) DBG | Closing plugin on server side
	I0719 03:39:14.988533  131185 main.go:141] libmachine: Successfully made call to close driver server
	I0719 03:39:14.988540  131185 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 03:39:14.988550  131185 addons.go:475] Verifying addon ingress=true in "addons-513705"
	I0719 03:39:14.988937  131185 main.go:141] libmachine: (addons-513705) DBG | Closing plugin on server side
	I0719 03:39:14.988976  131185 main.go:141] libmachine: Successfully made call to close driver server
	I0719 03:39:14.988987  131185 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 03:39:14.989161  131185 main.go:141] libmachine: (addons-513705) DBG | Closing plugin on server side
	I0719 03:39:14.989189  131185 main.go:141] libmachine: Successfully made call to close driver server
	I0719 03:39:14.989199  131185 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 03:39:14.989207  131185 addons.go:475] Verifying addon metrics-server=true in "addons-513705"
	I0719 03:39:14.989589  131185 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-513705 service yakd-dashboard -n yakd-dashboard
	
	I0719 03:39:14.988754  131185 main.go:141] libmachine: Successfully made call to close driver server
	I0719 03:39:14.991080  131185 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 03:39:14.988775  131185 main.go:141] libmachine: (addons-513705) DBG | Closing plugin on server side
	I0719 03:39:14.988797  131185 main.go:141] libmachine: Successfully made call to close driver server
	I0719 03:39:14.991156  131185 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 03:39:14.988814  131185 main.go:141] libmachine: (addons-513705) DBG | Closing plugin on server side
	I0719 03:39:14.991169  131185 addons.go:475] Verifying addon registry=true in "addons-513705"
	I0719 03:39:14.988832  131185 main.go:141] libmachine: Successfully made call to close driver server
	I0719 03:39:14.991196  131185 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 03:39:14.988847  131185 main.go:141] libmachine: (addons-513705) DBG | Closing plugin on server side
	I0719 03:39:14.988873  131185 main.go:141] libmachine: Successfully made call to close driver server
	I0719 03:39:14.991343  131185 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 03:39:14.991596  131185 out.go:177] * Verifying ingress addon...
	I0719 03:39:14.992597  131185 out.go:177] * Verifying registry addon...
	I0719 03:39:14.994409  131185 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0719 03:39:14.994488  131185 node_ready.go:49] node "addons-513705" has status "Ready":"True"
	I0719 03:39:14.994509  131185 node_ready.go:38] duration metric: took 7.969536ms for node "addons-513705" to be "Ready" ...
	I0719 03:39:14.994523  131185 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 03:39:14.994998  131185 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0719 03:39:15.014581  131185 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4f56n" in "kube-system" namespace to be "Ready" ...
	I0719 03:39:15.029946  131185 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0719 03:39:15.029974  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:15.030177  131185 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0719 03:39:15.030194  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
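The kapi.go lines above (and the many "current state: Pending" lines that follow) come from a poll loop: list the pods behind each label selector and report their phase until one is Running (minikube additionally checks the Ready condition) or the timeout expires. A minimal sketch of such a loop using client-go, with the kubeconfig path, namespace, and selector taken from the log; the function layout is an assumption, not minikube's actual kapi implementation:

	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
	
		selector := "kubernetes.io/minikube-addons=registry"
		err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // treat API hiccups as transient and keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Printf("pod %s is Running\n", p.Name)
					return true, nil
				}
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			}
			return false, nil
		})
		if err != nil {
			panic(err)
		}
	}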
	I0719 03:39:15.035706  131185 main.go:141] libmachine: Making call to close driver server
	I0719 03:39:15.035725  131185 main.go:141] libmachine: (addons-513705) Calling .Close
	I0719 03:39:15.036003  131185 main.go:141] libmachine: (addons-513705) DBG | Closing plugin on server side
	I0719 03:39:15.036051  131185 main.go:141] libmachine: Successfully made call to close driver server
	I0719 03:39:15.036065  131185 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 03:39:15.290409  131185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 03:39:15.489420  131185 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-513705" context rescaled to 1 replicas
	I0719 03:39:15.501664  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:15.506312  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:15.521964  131185 pod_ready.go:92] pod "coredns-7db6d8ff4d-4f56n" in "kube-system" namespace has status "Ready":"True"
	I0719 03:39:15.521997  131185 pod_ready.go:81] duration metric: took 507.387029ms for pod "coredns-7db6d8ff4d-4f56n" in "kube-system" namespace to be "Ready" ...
	I0719 03:39:15.522010  131185 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-lhxb5" in "kube-system" namespace to be "Ready" ...
	I0719 03:39:15.529383  131185 pod_ready.go:92] pod "coredns-7db6d8ff4d-lhxb5" in "kube-system" namespace has status "Ready":"True"
	I0719 03:39:15.529413  131185 pod_ready.go:81] duration metric: took 7.393209ms for pod "coredns-7db6d8ff4d-lhxb5" in "kube-system" namespace to be "Ready" ...
	I0719 03:39:15.529425  131185 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-513705" in "kube-system" namespace to be "Ready" ...
	I0719 03:39:15.538585  131185 pod_ready.go:92] pod "etcd-addons-513705" in "kube-system" namespace has status "Ready":"True"
	I0719 03:39:15.538608  131185 pod_ready.go:81] duration metric: took 9.174077ms for pod "etcd-addons-513705" in "kube-system" namespace to be "Ready" ...
	I0719 03:39:15.538620  131185 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-513705" in "kube-system" namespace to be "Ready" ...
	I0719 03:39:15.549779  131185 pod_ready.go:92] pod "kube-apiserver-addons-513705" in "kube-system" namespace has status "Ready":"True"
	I0719 03:39:15.549810  131185 pod_ready.go:81] duration metric: took 11.181293ms for pod "kube-apiserver-addons-513705" in "kube-system" namespace to be "Ready" ...
	I0719 03:39:15.549823  131185 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-513705" in "kube-system" namespace to be "Ready" ...
	I0719 03:39:15.794774  131185 pod_ready.go:92] pod "kube-controller-manager-addons-513705" in "kube-system" namespace has status "Ready":"True"
	I0719 03:39:15.794802  131185 pod_ready.go:81] duration metric: took 244.969927ms for pod "kube-controller-manager-addons-513705" in "kube-system" namespace to be "Ready" ...
	I0719 03:39:15.794817  131185 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-5zw5l" in "kube-system" namespace to be "Ready" ...
	I0719 03:39:15.985892  131185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.285694554s)
	I0719 03:39:15.985938  131185 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.669658596s)
	I0719 03:39:15.985969  131185 main.go:141] libmachine: Making call to close driver server
	I0719 03:39:15.985983  131185 main.go:141] libmachine: (addons-513705) Calling .Close
	I0719 03:39:15.986323  131185 main.go:141] libmachine: Successfully made call to close driver server
	I0719 03:39:15.986344  131185 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 03:39:15.986354  131185 main.go:141] libmachine: Making call to close driver server
	I0719 03:39:15.986363  131185 main.go:141] libmachine: (addons-513705) Calling .Close
	I0719 03:39:15.986372  131185 main.go:141] libmachine: (addons-513705) DBG | Closing plugin on server side
	I0719 03:39:15.986616  131185 main.go:141] libmachine: Successfully made call to close driver server
	I0719 03:39:15.986638  131185 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 03:39:15.986648  131185 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-513705"
	I0719 03:39:15.986652  131185 main.go:141] libmachine: (addons-513705) DBG | Closing plugin on server side
	I0719 03:39:15.987537  131185 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0719 03:39:15.988480  131185 out.go:177] * Verifying csi-hostpath-driver addon...
	I0719 03:39:15.990021  131185 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0719 03:39:15.990800  131185 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0719 03:39:15.991102  131185 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0719 03:39:15.991124  131185 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0719 03:39:16.028658  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:16.028692  131185 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0719 03:39:16.028707  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:16.043186  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:16.061720  131185 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0719 03:39:16.061746  131185 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0719 03:39:16.170357  131185 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0719 03:39:16.170384  131185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0719 03:39:16.189643  131185 pod_ready.go:92] pod "kube-proxy-5zw5l" in "kube-system" namespace has status "Ready":"True"
	I0719 03:39:16.189667  131185 pod_ready.go:81] duration metric: took 394.843459ms for pod "kube-proxy-5zw5l" in "kube-system" namespace to be "Ready" ...
	I0719 03:39:16.189678  131185 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-513705" in "kube-system" namespace to be "Ready" ...
	I0719 03:39:16.205431  131185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0719 03:39:16.496492  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:16.509105  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:16.509772  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:16.615558  131185 pod_ready.go:92] pod "kube-scheduler-addons-513705" in "kube-system" namespace has status "Ready":"True"
	I0719 03:39:16.615580  131185 pod_ready.go:81] duration metric: took 425.896299ms for pod "kube-scheduler-addons-513705" in "kube-system" namespace to be "Ready" ...
	I0719 03:39:16.615591  131185 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace to be "Ready" ...
	I0719 03:39:16.999331  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:17.002977  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:17.004038  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:17.532433  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:17.539396  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:17.543718  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:17.618584  131185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.328118863s)
	I0719 03:39:17.618602  131185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.413136099s)
	I0719 03:39:17.618646  131185 main.go:141] libmachine: Making call to close driver server
	I0719 03:39:17.618660  131185 main.go:141] libmachine: (addons-513705) Calling .Close
	I0719 03:39:17.618646  131185 main.go:141] libmachine: Making call to close driver server
	I0719 03:39:17.618730  131185 main.go:141] libmachine: (addons-513705) Calling .Close
	I0719 03:39:17.618984  131185 main.go:141] libmachine: Successfully made call to close driver server
	I0719 03:39:17.619018  131185 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 03:39:17.619015  131185 main.go:141] libmachine: (addons-513705) DBG | Closing plugin on server side
	I0719 03:39:17.619050  131185 main.go:141] libmachine: Making call to close driver server
	I0719 03:39:17.619069  131185 main.go:141] libmachine: (addons-513705) Calling .Close
	I0719 03:39:17.619100  131185 main.go:141] libmachine: Successfully made call to close driver server
	I0719 03:39:17.619113  131185 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 03:39:17.619127  131185 main.go:141] libmachine: (addons-513705) DBG | Closing plugin on server side
	I0719 03:39:17.619130  131185 main.go:141] libmachine: Making call to close driver server
	I0719 03:39:17.619145  131185 main.go:141] libmachine: (addons-513705) Calling .Close
	I0719 03:39:17.619322  131185 main.go:141] libmachine: (addons-513705) DBG | Closing plugin on server side
	I0719 03:39:17.619349  131185 main.go:141] libmachine: Successfully made call to close driver server
	I0719 03:39:17.619393  131185 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 03:39:17.619405  131185 main.go:141] libmachine: (addons-513705) DBG | Closing plugin on server side
	I0719 03:39:17.619447  131185 main.go:141] libmachine: Successfully made call to close driver server
	I0719 03:39:17.619456  131185 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 03:39:17.621078  131185 addons.go:475] Verifying addon gcp-auth=true in "addons-513705"
	I0719 03:39:17.623448  131185 out.go:177] * Verifying gcp-auth addon...
	I0719 03:39:17.625757  131185 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0719 03:39:17.629974  131185 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0719 03:39:17.629990  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:17.997383  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:17.999544  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:18.000341  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:18.129859  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:18.496845  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:18.502101  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:18.503963  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:18.621517  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:39:18.629658  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:18.996328  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:19.000293  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:19.000814  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:19.129439  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:19.497713  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:19.500255  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:19.501126  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:19.629212  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:19.996468  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:19.999515  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:19.999677  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:20.129256  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:20.498638  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:20.509551  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:20.509748  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:20.623016  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:39:20.628712  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:20.998183  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:21.002204  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:21.002978  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:21.129171  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:21.497148  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:21.499835  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:21.501129  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:21.629331  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:21.997443  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:22.001890  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:22.004271  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:22.128248  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:22.496528  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:22.500201  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:22.500410  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:22.623084  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:39:22.629409  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:22.999147  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:23.001182  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:23.002515  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:23.129210  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:23.496228  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:23.498171  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:23.499573  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:23.628866  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:24.220747  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:24.221426  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:24.222955  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:24.225054  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:24.497031  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:24.499546  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:24.500395  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:24.628886  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:24.997643  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:24.999676  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:25.000243  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:25.120569  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:39:25.128887  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:25.498707  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:25.500555  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:25.501087  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:25.628916  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:25.996281  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:26.002922  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:26.003051  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:26.129335  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:26.496440  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:26.499891  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:26.499964  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:26.629472  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:26.998154  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:26.998277  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:27.000343  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:27.120822  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:39:27.129212  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:27.496471  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:27.500688  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:27.500816  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:27.629601  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:27.997868  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:28.000405  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:28.000660  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:28.128889  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:28.497498  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:28.499149  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:28.504025  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:28.633752  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:28.996322  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:28.999213  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:28.999781  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:29.121861  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:39:29.129359  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:29.993375  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:29.994633  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:29.995149  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:29.997173  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:30.002336  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:30.003058  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:30.003144  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:30.129811  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:30.495480  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:30.497817  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:30.499777  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:30.629002  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:30.996331  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:30.999091  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:31.003128  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:31.129306  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:31.497151  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:31.498431  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:31.503398  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:31.623776  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:39:31.628895  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:31.999549  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:32.007564  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:32.009587  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:32.438360  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:32.499103  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:32.501171  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:32.501364  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:32.629240  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:32.995729  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:32.998353  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:33.000551  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:33.129210  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:33.496596  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:33.498997  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:33.499136  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:33.628988  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:33.997891  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:34.002237  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:34.002521  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:34.121590  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:39:34.128784  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:34.509440  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:34.510151  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:34.511273  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:34.630487  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:35.001925  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:35.002305  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:35.002369  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:35.129725  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:35.498900  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:35.500359  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:35.505173  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:35.629399  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:35.997519  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:36.003234  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:36.003700  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:36.129398  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:36.495917  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:36.498679  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:36.498836  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:36.622196  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:39:36.628870  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:36.996246  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:37.001168  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:37.001628  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:37.128315  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:37.496365  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:37.500293  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:37.500506  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:37.628921  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:37.996052  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:37.999260  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:38.021456  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:38.129217  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:38.555933  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:38.556331  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:38.558119  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:38.622371  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:39:38.629257  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:38.997376  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:38.999863  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:39.000975  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:39.129511  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:39.498141  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:39.498675  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:39.500064  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:39.629046  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:39.996517  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:40.000799  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:40.001098  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:40.128387  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:40.496778  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:40.498914  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:40.499229  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:40.628690  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:40.995518  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:40.999488  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:41.001818  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:41.121965  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:39:41.131248  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:41.496132  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:41.499666  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:41.499833  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:41.628632  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:41.996622  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:41.998983  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:41.999468  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:42.128635  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:42.496600  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:42.499027  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:42.499250  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:42.628508  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:42.996931  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:42.999510  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:42.999845  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:43.122064  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:39:43.129714  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:43.496339  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:43.499058  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:43.499365  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:43.629357  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:43.997244  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:43.999360  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:44.001380  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:44.128882  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:44.496937  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:44.500403  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:44.500644  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:44.628841  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:44.997444  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:44.998870  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:45.001217  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:45.128668  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:45.496511  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:45.499446  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:45.504422  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:45.621733  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:39:45.629662  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:45.997480  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:45.999806  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:45.999924  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:46.128736  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:46.496122  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:46.498230  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:46.508306  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:46.634830  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:46.995450  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:46.997718  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:47.000401  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:47.129881  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:47.495967  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:47.498987  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:47.499272  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:47.628511  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:47.996862  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:47.999469  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:47.999925  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:48.121919  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:39:48.129140  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:48.497489  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:48.499218  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:48.499696  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:48.629671  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:48.996826  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:49.000853  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:49.001783  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:49.128535  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:49.499691  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:49.505503  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:49.505786  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:49.628597  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:49.996165  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:49.998917  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:49.999301  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:50.128548  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:50.496890  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:50.498670  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:50.498797  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:50.621353  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:39:50.628655  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:50.997633  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:51.002107  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:51.002291  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:51.128914  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:51.495610  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:51.498054  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:51.499701  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:51.629213  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:51.995905  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:51.998058  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:51.999925  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:52.129227  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:52.606496  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:52.608617  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:52.610530  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:52.624013  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:39:52.630457  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:52.996657  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:53.000579  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:53.000745  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:53.129094  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:53.496074  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:53.497567  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:53.499371  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:53.628482  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:53.996479  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:53.999019  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:53.999396  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:39:54.131084  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:54.498917  131185 kapi.go:107] duration metric: took 39.503914997s to wait for kubernetes.io/minikube-addons=registry ...
	I0719 03:39:54.499518  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:54.499524  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:54.629097  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:54.996078  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:54.997917  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:55.121683  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:39:55.129405  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:55.497825  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:55.498703  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:55.631015  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:55.995654  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:56.006196  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:56.133999  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:56.495934  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:56.498001  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:56.630062  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:56.996219  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:56.998248  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:57.128960  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:57.495936  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:57.498082  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:57.621946  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:39:57.629217  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:57.996078  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:57.998057  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:58.128449  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:58.496549  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:58.497818  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:58.629601  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:58.997018  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:58.999171  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:59.128956  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:59.496467  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:59.498736  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:39:59.629229  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:39:59.995991  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:39:59.998203  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:00.123191  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:40:00.128382  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:00.496415  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:00.498256  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:00.629250  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:00.996601  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:00.998579  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:01.129193  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:01.496729  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:01.498475  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:01.628085  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:01.996936  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:01.999115  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:02.128981  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:02.497919  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:02.499140  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:02.622660  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:40:02.629229  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:02.996666  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:02.998600  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:03.129123  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:03.498991  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:03.499605  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:03.628987  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:03.996250  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:03.998744  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:04.130333  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:04.500548  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:04.503174  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:04.622917  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:40:04.629540  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:04.996637  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:04.998890  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:05.129351  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:05.496600  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:05.498034  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:05.629412  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:05.997947  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:05.998855  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:06.128912  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:06.495666  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:06.498035  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:06.628583  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:06.996605  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:06.998509  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:07.121736  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:40:07.129717  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:07.498303  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:07.499731  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:07.631390  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:07.996511  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:07.998834  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:08.129456  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:08.498362  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:08.499909  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:08.628497  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:08.999131  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:08.999408  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:09.123152  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:40:09.131008  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:09.496118  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:09.498190  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:09.629107  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:09.996187  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:09.999621  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:10.128967  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:10.496717  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:10.500918  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:10.631436  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:10.998653  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:11.001578  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:11.128632  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:11.499595  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:11.502813  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:11.621429  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:40:11.628528  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:11.996355  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:11.998897  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:12.129474  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:12.496844  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:12.499218  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:12.629812  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:13.439096  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:13.440656  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:13.440992  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:13.502624  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:13.503867  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:13.625307  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:40:13.628636  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:13.998349  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:13.999356  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:14.128617  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:14.497604  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:14.501184  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:14.630075  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:14.996449  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:14.999701  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:15.128778  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:15.497037  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:15.498912  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:15.629890  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:15.996409  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:15.999430  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:16.121856  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:40:16.129268  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:16.496140  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:16.499256  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:16.629025  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:17.000234  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:17.002129  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:17.129234  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:17.498529  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:17.505277  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:17.629838  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:18.000221  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:18.004879  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:18.128777  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:18.129221  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:40:18.499224  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:18.509520  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:18.628958  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:18.995924  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:18.997934  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:19.128927  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:19.497079  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:19.502622  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:19.629160  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:19.996198  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:19.998706  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:20.129054  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:20.496619  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:20.500459  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:20.621910  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:40:20.632489  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:20.996588  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:20.998281  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:21.128642  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:21.496299  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:21.498564  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:21.628868  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:21.997641  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:22.000992  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:22.128912  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:22.681709  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:22.682505  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:22.682761  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:22.700037  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:40:22.996849  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:23.002181  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:23.130699  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:23.496586  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:23.501885  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:23.629664  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:23.996192  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:23.998423  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:24.128656  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:24.496498  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:24.498902  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:24.628650  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:24.997161  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:24.998874  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:25.123326  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:40:25.128781  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:25.500763  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:25.500833  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:25.976489  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:25.996456  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:25.998577  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:26.128962  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:26.496261  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:26.498286  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:26.628421  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:26.996462  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:26.998833  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:27.124175  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:40:27.132059  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:27.495920  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:27.498149  131185 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:40:27.628948  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:27.998886  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:27.999787  131185 kapi.go:107] duration metric: took 1m13.005377788s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0719 03:40:28.128368  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:28.496753  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:28.629728  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:28.997538  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:29.130529  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:29.496515  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:29.621200  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:40:29.628613  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:29.996818  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:30.129164  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:30.495794  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:30.628316  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:30.996256  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:31.128902  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:31.496138  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:31.621433  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:40:31.628699  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:32.077518  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:32.130202  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:40:32.501153  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:32.628204  131185 kapi.go:107] duration metric: took 1m15.002444133s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0719 03:40:32.629636  131185 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-513705 cluster.
	I0719 03:40:32.630900  131185 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0719 03:40:32.632107  131185 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0719 03:40:32.996109  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:33.497433  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:33.621867  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:40:33.997689  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:34.496551  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:34.996766  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:35.496256  131185 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:40:35.996964  131185 kapi.go:107] duration metric: took 1m20.006161953s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0719 03:40:35.998662  131185 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, default-storageclass, inspektor-gadget, metrics-server, storage-provisioner, helm-tiller, cloud-spanner, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0719 03:40:35.999842  131185 addons.go:510] duration metric: took 1m29.247984052s for enable addons: enabled=[nvidia-device-plugin ingress-dns default-storageclass inspektor-gadget metrics-server storage-provisioner helm-tiller cloud-spanner yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0719 03:40:36.122575  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:40:38.620641  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:40:40.622622  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:40:42.623499  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:40:45.122774  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:40:47.621722  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:40:49.623069  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:40:52.121809  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:40:54.130175  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:40:56.621386  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:40:58.622444  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:41:01.122121  131185 pod_ready.go:102] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"False"
	I0719 03:41:02.121844  131185 pod_ready.go:92] pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace has status "Ready":"True"
	I0719 03:41:02.121866  131185 pod_ready.go:81] duration metric: took 1m45.506268286s for pod "metrics-server-c59844bb4-7fj9m" in "kube-system" namespace to be "Ready" ...
	I0719 03:41:02.121876  131185 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-wxtdd" in "kube-system" namespace to be "Ready" ...
	I0719 03:41:02.125933  131185 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-wxtdd" in "kube-system" namespace has status "Ready":"True"
	I0719 03:41:02.125955  131185 pod_ready.go:81] duration metric: took 4.073329ms for pod "nvidia-device-plugin-daemonset-wxtdd" in "kube-system" namespace to be "Ready" ...
	I0719 03:41:02.125974  131185 pod_ready.go:38] duration metric: took 1m47.131436731s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 03:41:02.126005  131185 api_server.go:52] waiting for apiserver process to appear ...
	I0719 03:41:02.126055  131185 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 03:41:02.126133  131185 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 03:41:02.169797  131185 cri.go:89] found id: "95760276bef17d5f9c55ecd7fd66112b6ed2edf93fea6c46a6021c3890211078"
	I0719 03:41:02.169822  131185 cri.go:89] found id: ""
	I0719 03:41:02.169831  131185 logs.go:276] 1 containers: [95760276bef17d5f9c55ecd7fd66112b6ed2edf93fea6c46a6021c3890211078]
	I0719 03:41:02.169886  131185 ssh_runner.go:195] Run: which crictl
	I0719 03:41:02.174296  131185 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 03:41:02.174364  131185 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 03:41:02.219330  131185 cri.go:89] found id: "b96e41932283fbf53dc2ef390152968f8b7d4529bfcd74b9593d6a409f3e8f39"
	I0719 03:41:02.219355  131185 cri.go:89] found id: ""
	I0719 03:41:02.219364  131185 logs.go:276] 1 containers: [b96e41932283fbf53dc2ef390152968f8b7d4529bfcd74b9593d6a409f3e8f39]
	I0719 03:41:02.219423  131185 ssh_runner.go:195] Run: which crictl
	I0719 03:41:02.223892  131185 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 03:41:02.223949  131185 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 03:41:02.286581  131185 cri.go:89] found id: "9dba773f121c5d7d9649a37c5348c9ef928eab33489a55cc029ee75ce273b868"
	I0719 03:41:02.286603  131185 cri.go:89] found id: ""
	I0719 03:41:02.286610  131185 logs.go:276] 1 containers: [9dba773f121c5d7d9649a37c5348c9ef928eab33489a55cc029ee75ce273b868]
	I0719 03:41:02.286663  131185 ssh_runner.go:195] Run: which crictl
	I0719 03:41:02.290832  131185 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 03:41:02.290888  131185 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 03:41:02.330109  131185 cri.go:89] found id: "e38ec0a2fdee8f4f0f5a25744d6da3298fb2cd41eaf4ed4ae035136da2b41a04"
	I0719 03:41:02.330131  131185 cri.go:89] found id: ""
	I0719 03:41:02.330140  131185 logs.go:276] 1 containers: [e38ec0a2fdee8f4f0f5a25744d6da3298fb2cd41eaf4ed4ae035136da2b41a04]
	I0719 03:41:02.330212  131185 ssh_runner.go:195] Run: which crictl
	I0719 03:41:02.334087  131185 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 03:41:02.334167  131185 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 03:41:02.371184  131185 cri.go:89] found id: "bb3fb21dbd0f7e49c7d00c77997198d4c02c0687c6970808b202a8eeec2c1f5b"
	I0719 03:41:02.371213  131185 cri.go:89] found id: ""
	I0719 03:41:02.371223  131185 logs.go:276] 1 containers: [bb3fb21dbd0f7e49c7d00c77997198d4c02c0687c6970808b202a8eeec2c1f5b]
	I0719 03:41:02.371293  131185 ssh_runner.go:195] Run: which crictl
	I0719 03:41:02.375044  131185 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 03:41:02.375119  131185 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 03:41:02.418934  131185 cri.go:89] found id: "cfceedc2cf8a6805073f755e69da74d49eb61b44c1ddfe1918faff75050768ed"
	I0719 03:41:02.418964  131185 cri.go:89] found id: ""
	I0719 03:41:02.418974  131185 logs.go:276] 1 containers: [cfceedc2cf8a6805073f755e69da74d49eb61b44c1ddfe1918faff75050768ed]
	I0719 03:41:02.419041  131185 ssh_runner.go:195] Run: which crictl
	I0719 03:41:02.423447  131185 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 03:41:02.423504  131185 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 03:41:02.465186  131185 cri.go:89] found id: ""
	I0719 03:41:02.465214  131185 logs.go:276] 0 containers: []
	W0719 03:41:02.465224  131185 logs.go:278] No container was found matching "kindnet"
	I0719 03:41:02.465234  131185 logs.go:123] Gathering logs for kube-controller-manager [cfceedc2cf8a6805073f755e69da74d49eb61b44c1ddfe1918faff75050768ed] ...
	I0719 03:41:02.465249  131185 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cfceedc2cf8a6805073f755e69da74d49eb61b44c1ddfe1918faff75050768ed"
	I0719 03:41:02.533376  131185 logs.go:123] Gathering logs for container status ...
	I0719 03:41:02.533424  131185 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 03:41:02.585862  131185 logs.go:123] Gathering logs for kubelet ...
	I0719 03:41:02.585904  131185 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0719 03:41:02.629391  131185 logs.go:138] Found kubelet problem: Jul 19 03:39:06 addons-513705 kubelet[1273]: W0719 03:39:06.747037    1273 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-513705" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513705' and this object
	W0719 03:41:02.629559  131185 logs.go:138] Found kubelet problem: Jul 19 03:39:06 addons-513705 kubelet[1273]: E0719 03:39:06.747082    1273 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-513705" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513705' and this object
	W0719 03:41:02.629696  131185 logs.go:138] Found kubelet problem: Jul 19 03:39:06 addons-513705 kubelet[1273]: W0719 03:39:06.748375    1273 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513705" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513705' and this object
	W0719 03:41:02.629844  131185 logs.go:138] Found kubelet problem: Jul 19 03:39:06 addons-513705 kubelet[1273]: E0719 03:39:06.748424    1273 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513705" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513705' and this object
	I0719 03:41:02.663937  131185 logs.go:123] Gathering logs for describe nodes ...
	I0719 03:41:02.663979  131185 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 03:41:02.786739  131185 logs.go:123] Gathering logs for etcd [b96e41932283fbf53dc2ef390152968f8b7d4529bfcd74b9593d6a409f3e8f39] ...
	I0719 03:41:02.786786  131185 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b96e41932283fbf53dc2ef390152968f8b7d4529bfcd74b9593d6a409f3e8f39"
	I0719 03:41:02.856238  131185 logs.go:123] Gathering logs for kube-proxy [bb3fb21dbd0f7e49c7d00c77997198d4c02c0687c6970808b202a8eeec2c1f5b] ...
	I0719 03:41:02.856295  131185 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb3fb21dbd0f7e49c7d00c77997198d4c02c0687c6970808b202a8eeec2c1f5b"
	I0719 03:41:02.893136  131185 logs.go:123] Gathering logs for CRI-O ...
	I0719 03:41:02.893170  131185 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 03:41:03.982228  131185 logs.go:123] Gathering logs for dmesg ...
	I0719 03:41:03.982278  131185 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 03:41:04.000159  131185 logs.go:123] Gathering logs for kube-apiserver [95760276bef17d5f9c55ecd7fd66112b6ed2edf93fea6c46a6021c3890211078] ...
	I0719 03:41:04.000197  131185 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95760276bef17d5f9c55ecd7fd66112b6ed2edf93fea6c46a6021c3890211078"
	I0719 03:41:04.046886  131185 logs.go:123] Gathering logs for coredns [9dba773f121c5d7d9649a37c5348c9ef928eab33489a55cc029ee75ce273b868] ...
	I0719 03:41:04.046930  131185 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9dba773f121c5d7d9649a37c5348c9ef928eab33489a55cc029ee75ce273b868"
	I0719 03:41:04.082275  131185 logs.go:123] Gathering logs for kube-scheduler [e38ec0a2fdee8f4f0f5a25744d6da3298fb2cd41eaf4ed4ae035136da2b41a04] ...
	I0719 03:41:04.082313  131185 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e38ec0a2fdee8f4f0f5a25744d6da3298fb2cd41eaf4ed4ae035136da2b41a04"
	I0719 03:41:04.124474  131185 out.go:304] Setting ErrFile to fd 2...
	I0719 03:41:04.124509  131185 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0719 03:41:04.124586  131185 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0719 03:41:04.124603  131185 out.go:239]   Jul 19 03:39:06 addons-513705 kubelet[1273]: W0719 03:39:06.747037    1273 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-513705" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513705' and this object
	  Jul 19 03:39:06 addons-513705 kubelet[1273]: W0719 03:39:06.747037    1273 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-513705" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513705' and this object
	W0719 03:41:04.124615  131185 out.go:239]   Jul 19 03:39:06 addons-513705 kubelet[1273]: E0719 03:39:06.747082    1273 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-513705" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513705' and this object
	  Jul 19 03:39:06 addons-513705 kubelet[1273]: E0719 03:39:06.747082    1273 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-513705" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513705' and this object
	W0719 03:41:04.124628  131185 out.go:239]   Jul 19 03:39:06 addons-513705 kubelet[1273]: W0719 03:39:06.748375    1273 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513705" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513705' and this object
	  Jul 19 03:39:06 addons-513705 kubelet[1273]: W0719 03:39:06.748375    1273 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513705" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513705' and this object
	W0719 03:41:04.124641  131185 out.go:239]   Jul 19 03:39:06 addons-513705 kubelet[1273]: E0719 03:39:06.748424    1273 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513705" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513705' and this object
	  Jul 19 03:39:06 addons-513705 kubelet[1273]: E0719 03:39:06.748424    1273 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513705" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513705' and this object
	I0719 03:41:04.124654  131185 out.go:304] Setting ErrFile to fd 2...
	I0719 03:41:04.124666  131185 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:41:14.125786  131185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 03:41:14.146703  131185 api_server.go:72] duration metric: took 2m7.394863427s to wait for apiserver process to appear ...
	I0719 03:41:14.146733  131185 api_server.go:88] waiting for apiserver healthz status ...
	I0719 03:41:14.146771  131185 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 03:41:14.146836  131185 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 03:41:14.182028  131185 cri.go:89] found id: "95760276bef17d5f9c55ecd7fd66112b6ed2edf93fea6c46a6021c3890211078"
	I0719 03:41:14.182053  131185 cri.go:89] found id: ""
	I0719 03:41:14.182064  131185 logs.go:276] 1 containers: [95760276bef17d5f9c55ecd7fd66112b6ed2edf93fea6c46a6021c3890211078]
	I0719 03:41:14.182130  131185 ssh_runner.go:195] Run: which crictl
	I0719 03:41:14.186267  131185 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 03:41:14.186323  131185 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 03:41:14.222071  131185 cri.go:89] found id: "b96e41932283fbf53dc2ef390152968f8b7d4529bfcd74b9593d6a409f3e8f39"
	I0719 03:41:14.222098  131185 cri.go:89] found id: ""
	I0719 03:41:14.222108  131185 logs.go:276] 1 containers: [b96e41932283fbf53dc2ef390152968f8b7d4529bfcd74b9593d6a409f3e8f39]
	I0719 03:41:14.222155  131185 ssh_runner.go:195] Run: which crictl
	I0719 03:41:14.225950  131185 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 03:41:14.226002  131185 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 03:41:14.267151  131185 cri.go:89] found id: "9dba773f121c5d7d9649a37c5348c9ef928eab33489a55cc029ee75ce273b868"
	I0719 03:41:14.267176  131185 cri.go:89] found id: ""
	I0719 03:41:14.267185  131185 logs.go:276] 1 containers: [9dba773f121c5d7d9649a37c5348c9ef928eab33489a55cc029ee75ce273b868]
	I0719 03:41:14.267243  131185 ssh_runner.go:195] Run: which crictl
	I0719 03:41:14.271053  131185 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 03:41:14.271112  131185 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 03:41:14.313014  131185 cri.go:89] found id: "e38ec0a2fdee8f4f0f5a25744d6da3298fb2cd41eaf4ed4ae035136da2b41a04"
	I0719 03:41:14.313038  131185 cri.go:89] found id: ""
	I0719 03:41:14.313046  131185 logs.go:276] 1 containers: [e38ec0a2fdee8f4f0f5a25744d6da3298fb2cd41eaf4ed4ae035136da2b41a04]
	I0719 03:41:14.313132  131185 ssh_runner.go:195] Run: which crictl
	I0719 03:41:14.317022  131185 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 03:41:14.317107  131185 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 03:41:14.360412  131185 cri.go:89] found id: "bb3fb21dbd0f7e49c7d00c77997198d4c02c0687c6970808b202a8eeec2c1f5b"
	I0719 03:41:14.360435  131185 cri.go:89] found id: ""
	I0719 03:41:14.360443  131185 logs.go:276] 1 containers: [bb3fb21dbd0f7e49c7d00c77997198d4c02c0687c6970808b202a8eeec2c1f5b]
	I0719 03:41:14.360499  131185 ssh_runner.go:195] Run: which crictl
	I0719 03:41:14.364276  131185 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 03:41:14.364345  131185 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 03:41:14.405506  131185 cri.go:89] found id: "cfceedc2cf8a6805073f755e69da74d49eb61b44c1ddfe1918faff75050768ed"
	I0719 03:41:14.405525  131185 cri.go:89] found id: ""
	I0719 03:41:14.405535  131185 logs.go:276] 1 containers: [cfceedc2cf8a6805073f755e69da74d49eb61b44c1ddfe1918faff75050768ed]
	I0719 03:41:14.405590  131185 ssh_runner.go:195] Run: which crictl
	I0719 03:41:14.410086  131185 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 03:41:14.410199  131185 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 03:41:14.453123  131185 cri.go:89] found id: ""
	I0719 03:41:14.453151  131185 logs.go:276] 0 containers: []
	W0719 03:41:14.453161  131185 logs.go:278] No container was found matching "kindnet"
	I0719 03:41:14.453173  131185 logs.go:123] Gathering logs for CRI-O ...
	I0719 03:41:14.453187  131185 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 03:41:15.218141  131185 logs.go:123] Gathering logs for dmesg ...
	I0719 03:41:15.218185  131185 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 03:41:15.232691  131185 logs.go:123] Gathering logs for coredns [9dba773f121c5d7d9649a37c5348c9ef928eab33489a55cc029ee75ce273b868] ...
	I0719 03:41:15.232725  131185 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9dba773f121c5d7d9649a37c5348c9ef928eab33489a55cc029ee75ce273b868"
	I0719 03:41:15.272913  131185 logs.go:123] Gathering logs for kube-scheduler [e38ec0a2fdee8f4f0f5a25744d6da3298fb2cd41eaf4ed4ae035136da2b41a04] ...
	I0719 03:41:15.272942  131185 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e38ec0a2fdee8f4f0f5a25744d6da3298fb2cd41eaf4ed4ae035136da2b41a04"
	I0719 03:41:15.316980  131185 logs.go:123] Gathering logs for kube-proxy [bb3fb21dbd0f7e49c7d00c77997198d4c02c0687c6970808b202a8eeec2c1f5b] ...
	I0719 03:41:15.317056  131185 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb3fb21dbd0f7e49c7d00c77997198d4c02c0687c6970808b202a8eeec2c1f5b"
	I0719 03:41:15.359254  131185 logs.go:123] Gathering logs for kube-controller-manager [cfceedc2cf8a6805073f755e69da74d49eb61b44c1ddfe1918faff75050768ed] ...
	I0719 03:41:15.359290  131185 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cfceedc2cf8a6805073f755e69da74d49eb61b44c1ddfe1918faff75050768ed"
	I0719 03:41:15.419919  131185 logs.go:123] Gathering logs for kubelet ...
	I0719 03:41:15.419955  131185 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0719 03:41:15.462913  131185 logs.go:138] Found kubelet problem: Jul 19 03:39:06 addons-513705 kubelet[1273]: W0719 03:39:06.747037    1273 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-513705" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513705' and this object
	W0719 03:41:15.463076  131185 logs.go:138] Found kubelet problem: Jul 19 03:39:06 addons-513705 kubelet[1273]: E0719 03:39:06.747082    1273 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-513705" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513705' and this object
	W0719 03:41:15.463218  131185 logs.go:138] Found kubelet problem: Jul 19 03:39:06 addons-513705 kubelet[1273]: W0719 03:39:06.748375    1273 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513705" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513705' and this object
	W0719 03:41:15.463368  131185 logs.go:138] Found kubelet problem: Jul 19 03:39:06 addons-513705 kubelet[1273]: E0719 03:39:06.748424    1273 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513705" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513705' and this object
	I0719 03:41:15.496980  131185 logs.go:123] Gathering logs for describe nodes ...
	I0719 03:41:15.497018  131185 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0719 03:41:15.612776  131185 logs.go:123] Gathering logs for kube-apiserver [95760276bef17d5f9c55ecd7fd66112b6ed2edf93fea6c46a6021c3890211078] ...
	I0719 03:41:15.612805  131185 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 95760276bef17d5f9c55ecd7fd66112b6ed2edf93fea6c46a6021c3890211078"
	I0719 03:41:15.659327  131185 logs.go:123] Gathering logs for etcd [b96e41932283fbf53dc2ef390152968f8b7d4529bfcd74b9593d6a409f3e8f39] ...
	I0719 03:41:15.659362  131185 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b96e41932283fbf53dc2ef390152968f8b7d4529bfcd74b9593d6a409f3e8f39"
	I0719 03:41:15.722749  131185 logs.go:123] Gathering logs for container status ...
	I0719 03:41:15.722790  131185 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 03:41:15.775612  131185 out.go:304] Setting ErrFile to fd 2...
	I0719 03:41:15.775639  131185 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0719 03:41:15.775694  131185 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0719 03:41:15.775704  131185 out.go:239]   Jul 19 03:39:06 addons-513705 kubelet[1273]: W0719 03:39:06.747037    1273 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-513705" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513705' and this object
	  Jul 19 03:39:06 addons-513705 kubelet[1273]: W0719 03:39:06.747037    1273 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-513705" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513705' and this object
	W0719 03:41:15.775710  131185 out.go:239]   Jul 19 03:39:06 addons-513705 kubelet[1273]: E0719 03:39:06.747082    1273 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-513705" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513705' and this object
	  Jul 19 03:39:06 addons-513705 kubelet[1273]: E0719 03:39:06.747082    1273 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-513705" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513705' and this object
	W0719 03:41:15.775718  131185 out.go:239]   Jul 19 03:39:06 addons-513705 kubelet[1273]: W0719 03:39:06.748375    1273 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513705" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513705' and this object
	  Jul 19 03:39:06 addons-513705 kubelet[1273]: W0719 03:39:06.748375    1273 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513705" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513705' and this object
	W0719 03:41:15.775727  131185 out.go:239]   Jul 19 03:39:06 addons-513705 kubelet[1273]: E0719 03:39:06.748424    1273 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513705" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513705' and this object
	  Jul 19 03:39:06 addons-513705 kubelet[1273]: E0719 03:39:06.748424    1273 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-513705" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-513705' and this object
	I0719 03:41:15.775734  131185 out.go:304] Setting ErrFile to fd 2...
	I0719 03:41:15.775742  131185 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:41:25.776718  131185 api_server.go:253] Checking apiserver healthz at https://192.168.39.209:8443/healthz ...
	I0719 03:41:25.781048  131185 api_server.go:279] https://192.168.39.209:8443/healthz returned 200:
	ok
	I0719 03:41:25.781992  131185 api_server.go:141] control plane version: v1.30.3
	I0719 03:41:25.782015  131185 api_server.go:131] duration metric: took 11.635275804s to wait for apiserver health ...
	I0719 03:41:25.782024  131185 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 03:41:25.782045  131185 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 03:41:25.782100  131185 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 03:41:25.820253  131185 cri.go:89] found id: "95760276bef17d5f9c55ecd7fd66112b6ed2edf93fea6c46a6021c3890211078"
	I0719 03:41:25.820280  131185 cri.go:89] found id: ""
	I0719 03:41:25.820290  131185 logs.go:276] 1 containers: [95760276bef17d5f9c55ecd7fd66112b6ed2edf93fea6c46a6021c3890211078]
	I0719 03:41:25.820347  131185 ssh_runner.go:195] Run: which crictl
	I0719 03:41:25.824208  131185 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 03:41:25.824266  131185 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 03:41:25.863268  131185 cri.go:89] found id: "b96e41932283fbf53dc2ef390152968f8b7d4529bfcd74b9593d6a409f3e8f39"
	I0719 03:41:25.863293  131185 cri.go:89] found id: ""
	I0719 03:41:25.863303  131185 logs.go:276] 1 containers: [b96e41932283fbf53dc2ef390152968f8b7d4529bfcd74b9593d6a409f3e8f39]
	I0719 03:41:25.863358  131185 ssh_runner.go:195] Run: which crictl
	I0719 03:41:25.866976  131185 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 03:41:25.867037  131185 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 03:41:25.902114  131185 cri.go:89] found id: "9dba773f121c5d7d9649a37c5348c9ef928eab33489a55cc029ee75ce273b868"
	I0719 03:41:25.902146  131185 cri.go:89] found id: ""
	I0719 03:41:25.902156  131185 logs.go:276] 1 containers: [9dba773f121c5d7d9649a37c5348c9ef928eab33489a55cc029ee75ce273b868]
	I0719 03:41:25.902205  131185 ssh_runner.go:195] Run: which crictl
	I0719 03:41:25.906151  131185 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 03:41:25.906209  131185 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 03:41:25.940105  131185 cri.go:89] found id: "e38ec0a2fdee8f4f0f5a25744d6da3298fb2cd41eaf4ed4ae035136da2b41a04"
	I0719 03:41:25.940133  131185 cri.go:89] found id: ""
	I0719 03:41:25.940142  131185 logs.go:276] 1 containers: [e38ec0a2fdee8f4f0f5a25744d6da3298fb2cd41eaf4ed4ae035136da2b41a04]
	I0719 03:41:25.940189  131185 ssh_runner.go:195] Run: which crictl
	I0719 03:41:25.943858  131185 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 03:41:25.943915  131185 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 03:41:25.981656  131185 cri.go:89] found id: "bb3fb21dbd0f7e49c7d00c77997198d4c02c0687c6970808b202a8eeec2c1f5b"
	I0719 03:41:25.981686  131185 cri.go:89] found id: ""
	I0719 03:41:25.981698  131185 logs.go:276] 1 containers: [bb3fb21dbd0f7e49c7d00c77997198d4c02c0687c6970808b202a8eeec2c1f5b]
	I0719 03:41:25.981759  131185 ssh_runner.go:195] Run: which crictl
	I0719 03:41:25.985791  131185 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 03:41:25.985849  131185 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 03:41:26.029526  131185 cri.go:89] found id: "cfceedc2cf8a6805073f755e69da74d49eb61b44c1ddfe1918faff75050768ed"
	I0719 03:41:26.029553  131185 cri.go:89] found id: ""
	I0719 03:41:26.029563  131185 logs.go:276] 1 containers: [cfceedc2cf8a6805073f755e69da74d49eb61b44c1ddfe1918faff75050768ed]
	I0719 03:41:26.029622  131185 ssh_runner.go:195] Run: which crictl
	I0719 03:41:26.033313  131185 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 03:41:26.033384  131185 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 03:41:26.076355  131185 cri.go:89] found id: ""
	I0719 03:41:26.076387  131185 logs.go:276] 0 containers: []
	W0719 03:41:26.076396  131185 logs.go:278] No container was found matching "kindnet"
	I0719 03:41:26.076405  131185 logs.go:123] Gathering logs for dmesg ...
	I0719 03:41:26.076419  131185 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0719 03:41:26.092080  131185 logs.go:123] Gathering logs for etcd [b96e41932283fbf53dc2ef390152968f8b7d4529bfcd74b9593d6a409f3e8f39] ...
	I0719 03:41:26.092114  131185 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b96e41932283fbf53dc2ef390152968f8b7d4529bfcd74b9593d6a409f3e8f39"
	I0719 03:41:26.151220  131185 logs.go:123] Gathering logs for kube-scheduler [e38ec0a2fdee8f4f0f5a25744d6da3298fb2cd41eaf4ed4ae035136da2b41a04] ...
	I0719 03:41:26.151266  131185 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e38ec0a2fdee8f4f0f5a25744d6da3298fb2cd41eaf4ed4ae035136da2b41a04"
	I0719 03:41:26.192469  131185 logs.go:123] Gathering logs for kube-proxy [bb3fb21dbd0f7e49c7d00c77997198d4c02c0687c6970808b202a8eeec2c1f5b] ...
	I0719 03:41:26.192500  131185 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb3fb21dbd0f7e49c7d00c77997198d4c02c0687c6970808b202a8eeec2c1f5b"
	I0719 03:41:26.231973  131185 logs.go:123] Gathering logs for CRI-O ...
	I0719 03:41:26.232002  131185 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"

                                                
                                                
** /stderr **
addons_test.go:112: out/minikube-linux-amd64 start -p addons-513705 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller failed: signal: killed
--- FAIL: TestAddons/Setup (2400.06s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 node stop m02 -v=7 --alsologtostderr
E0719 04:29:20.678279  130170 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/functional-554179/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-925161 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.470631653s)

                                                
                                                
-- stdout --
	* Stopping node "ha-925161-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 04:28:10.220639  149585 out.go:291] Setting OutFile to fd 1 ...
	I0719 04:28:10.220824  149585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:28:10.220835  149585 out.go:304] Setting ErrFile to fd 2...
	I0719 04:28:10.220841  149585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:28:10.221052  149585 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-122995/.minikube/bin
	I0719 04:28:10.221356  149585 mustload.go:65] Loading cluster: ha-925161
	I0719 04:28:10.221754  149585 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:28:10.221772  149585 stop.go:39] StopHost: ha-925161-m02
	I0719 04:28:10.222157  149585 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:28:10.222213  149585 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:28:10.238495  149585 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32835
	I0719 04:28:10.239016  149585 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:28:10.239588  149585 main.go:141] libmachine: Using API Version  1
	I0719 04:28:10.239612  149585 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:28:10.240022  149585 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:28:10.242530  149585 out.go:177] * Stopping node "ha-925161-m02"  ...
	I0719 04:28:10.243746  149585 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0719 04:28:10.243786  149585 main.go:141] libmachine: (ha-925161-m02) Calling .DriverName
	I0719 04:28:10.244026  149585 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0719 04:28:10.244053  149585 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHHostname
	I0719 04:28:10.246788  149585 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:28:10.247182  149585 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:28:10.247220  149585 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:28:10.247394  149585 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHPort
	I0719 04:28:10.247571  149585 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:28:10.247746  149585 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHUsername
	I0719 04:28:10.247886  149585 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02/id_rsa Username:docker}
	I0719 04:28:10.338306  149585 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0719 04:28:10.391918  149585 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0719 04:28:10.445376  149585 main.go:141] libmachine: Stopping "ha-925161-m02"...
	I0719 04:28:10.445403  149585 main.go:141] libmachine: (ha-925161-m02) Calling .GetState
	I0719 04:28:10.446941  149585 main.go:141] libmachine: (ha-925161-m02) Calling .Stop
	I0719 04:28:10.450318  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 0/120
	I0719 04:28:11.451716  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 1/120
	I0719 04:28:12.453188  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 2/120
	I0719 04:28:13.454555  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 3/120
	I0719 04:28:14.455842  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 4/120
	I0719 04:28:15.457845  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 5/120
	I0719 04:28:16.459646  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 6/120
	I0719 04:28:17.461356  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 7/120
	I0719 04:28:18.463747  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 8/120
	I0719 04:28:19.465527  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 9/120
	I0719 04:28:20.467417  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 10/120
	I0719 04:28:21.469011  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 11/120
	I0719 04:28:22.470732  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 12/120
	I0719 04:28:23.472546  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 13/120
	I0719 04:28:24.473886  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 14/120
	I0719 04:28:25.475725  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 15/120
	I0719 04:28:26.477280  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 16/120
	I0719 04:28:27.479426  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 17/120
	I0719 04:28:28.480765  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 18/120
	I0719 04:28:29.482105  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 19/120
	I0719 04:28:30.483308  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 20/120
	I0719 04:28:31.484881  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 21/120
	I0719 04:28:32.487237  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 22/120
	I0719 04:28:33.488959  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 23/120
	I0719 04:28:34.490381  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 24/120
	I0719 04:28:35.492386  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 25/120
	I0719 04:28:36.494520  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 26/120
	I0719 04:28:37.496816  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 27/120
	I0719 04:28:38.498322  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 28/120
	I0719 04:28:39.499681  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 29/120
	I0719 04:28:40.501726  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 30/120
	I0719 04:28:41.503102  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 31/120
	I0719 04:28:42.504578  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 32/120
	I0719 04:28:43.505807  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 33/120
	I0719 04:28:44.507168  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 34/120
	I0719 04:28:45.508979  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 35/120
	I0719 04:28:46.511214  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 36/120
	I0719 04:28:47.512795  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 37/120
	I0719 04:28:48.514172  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 38/120
	I0719 04:28:49.515499  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 39/120
	I0719 04:28:50.517609  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 40/120
	I0719 04:28:51.519005  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 41/120
	I0719 04:28:52.520253  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 42/120
	I0719 04:28:53.521666  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 43/120
	I0719 04:28:54.523465  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 44/120
	I0719 04:28:55.525420  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 45/120
	I0719 04:28:56.526836  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 46/120
	I0719 04:28:57.528071  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 47/120
	I0719 04:28:58.529373  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 48/120
	I0719 04:28:59.530863  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 49/120
	I0719 04:29:00.532784  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 50/120
	I0719 04:29:01.534397  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 51/120
	I0719 04:29:02.536209  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 52/120
	I0719 04:29:03.537519  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 53/120
	I0719 04:29:04.539462  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 54/120
	I0719 04:29:05.541246  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 55/120
	I0719 04:29:06.543516  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 56/120
	I0719 04:29:07.545191  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 57/120
	I0719 04:29:08.546529  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 58/120
	I0719 04:29:09.547765  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 59/120
	I0719 04:29:10.549759  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 60/120
	I0719 04:29:11.551219  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 61/120
	I0719 04:29:12.552826  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 62/120
	I0719 04:29:13.554604  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 63/120
	I0719 04:29:14.555915  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 64/120
	I0719 04:29:15.557848  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 65/120
	I0719 04:29:16.559543  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 66/120
	I0719 04:29:17.560977  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 67/120
	I0719 04:29:18.563360  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 68/120
	I0719 04:29:19.565166  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 69/120
	I0719 04:29:20.566971  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 70/120
	I0719 04:29:21.568273  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 71/120
	I0719 04:29:22.570047  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 72/120
	I0719 04:29:23.571519  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 73/120
	I0719 04:29:24.572893  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 74/120
	I0719 04:29:25.574682  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 75/120
	I0719 04:29:26.576852  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 76/120
	I0719 04:29:27.578070  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 77/120
	I0719 04:29:28.579406  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 78/120
	I0719 04:29:29.581353  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 79/120
	I0719 04:29:30.583467  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 80/120
	I0719 04:29:31.584913  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 81/120
	I0719 04:29:32.586083  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 82/120
	I0719 04:29:33.587717  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 83/120
	I0719 04:29:34.589149  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 84/120
	I0719 04:29:35.590929  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 85/120
	I0719 04:29:36.592406  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 86/120
	I0719 04:29:37.593690  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 87/120
	I0719 04:29:38.594971  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 88/120
	I0719 04:29:39.596332  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 89/120
	I0719 04:29:40.598668  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 90/120
	I0719 04:29:41.599853  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 91/120
	I0719 04:29:42.601179  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 92/120
	I0719 04:29:43.602502  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 93/120
	I0719 04:29:44.604318  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 94/120
	I0719 04:29:45.606105  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 95/120
	I0719 04:29:46.607733  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 96/120
	I0719 04:29:47.609350  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 97/120
	I0719 04:29:48.611784  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 98/120
	I0719 04:29:49.613617  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 99/120
	I0719 04:29:50.615708  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 100/120
	I0719 04:29:51.617133  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 101/120
	I0719 04:29:52.618657  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 102/120
	I0719 04:29:53.620524  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 103/120
	I0719 04:29:54.621956  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 104/120
	I0719 04:29:55.623986  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 105/120
	I0719 04:29:56.625572  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 106/120
	I0719 04:29:57.627476  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 107/120
	I0719 04:29:58.629239  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 108/120
	I0719 04:29:59.631458  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 109/120
	I0719 04:30:00.633643  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 110/120
	I0719 04:30:01.635549  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 111/120
	I0719 04:30:02.636732  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 112/120
	I0719 04:30:03.638074  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 113/120
	I0719 04:30:04.639695  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 114/120
	I0719 04:30:05.641813  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 115/120
	I0719 04:30:06.643399  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 116/120
	I0719 04:30:07.644623  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 117/120
	I0719 04:30:08.646041  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 118/120
	I0719 04:30:09.647462  149585 main.go:141] libmachine: (ha-925161-m02) Waiting for machine to stop 119/120
	I0719 04:30:10.648832  149585 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0719 04:30:10.649037  149585 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-925161 node stop m02 -v=7 --alsologtostderr": exit status 30
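The stderr above shows the shape of the failure: after backing up /etc/cni and /etc/kubernetes over SSH, libmachine asks the kvm2 driver to stop the domain and then polls the VM state once per second for 120 attempts ("Waiting for machine to stop 0/120" through "119/120") before giving up with "unable to stop vm, current state \"Running\"", which is what surfaces here as exit status 30. A minimal sketch of that wait loop follows, assuming the one-second poll and 120-attempt budget seen in the log; the Driver interface and stubbornVM type are illustrative stand-ins, not libmachine's actual API.

	// Minimal sketch of the stop-wait loop implied by the log above.
	// Assumption: one poll per second, 120 attempts; Driver is hypothetical.
	package main

	import (
		"fmt"
		"time"
	)

	type Driver interface {
		Stop() error               // request a guest shutdown
		GetState() (string, error) // e.g. "Running", "Stopped"
	}

	func waitForStop(d Driver, attempts int, poll time.Duration) error {
		if err := d.Stop(); err != nil {
			return err
		}
		for i := 0; i < attempts; i++ {
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			if state, err := d.GetState(); err == nil && state != "Running" {
				return nil
			}
			time.Sleep(poll)
		}
		state, _ := d.GetState()
		return fmt.Errorf("unable to stop vm, current state %q", state)
	}

	// stubbornVM never leaves "Running", reproducing the two-minute hang above.
	type stubbornVM struct{}

	func (stubbornVM) Stop() error                { return nil }
	func (stubbornVM) GetState() (string, error)  { return "Running", nil }

	func main() {
		// The real run uses 120 attempts x 1s; three fast polls keep the demo short.
		if err := waitForStop(stubbornVM{}, 3, 10*time.Millisecond); err != nil {
			fmt.Println("X Failed to stop node:", err)
		}
	}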
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-925161 status -v=7 --alsologtostderr: exit status 3 (19.054160777s)

-- stdout --
	ha-925161
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-925161-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-925161-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-925161-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0719 04:30:10.694380  149995 out.go:291] Setting OutFile to fd 1 ...
	I0719 04:30:10.694795  149995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:30:10.694812  149995 out.go:304] Setting ErrFile to fd 2...
	I0719 04:30:10.694818  149995 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:30:10.695335  149995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-122995/.minikube/bin
	I0719 04:30:10.695644  149995 out.go:298] Setting JSON to false
	I0719 04:30:10.695797  149995 notify.go:220] Checking for updates...
	I0719 04:30:10.695812  149995 mustload.go:65] Loading cluster: ha-925161
	I0719 04:30:10.696434  149995 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:30:10.696455  149995 status.go:255] checking status of ha-925161 ...
	I0719 04:30:10.696829  149995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:10.696871  149995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:10.712523  149995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34215
	I0719 04:30:10.712996  149995 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:10.713630  149995 main.go:141] libmachine: Using API Version  1
	I0719 04:30:10.713654  149995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:10.713982  149995 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:10.714181  149995 main.go:141] libmachine: (ha-925161) Calling .GetState
	I0719 04:30:10.715732  149995 status.go:330] ha-925161 host status = "Running" (err=<nil>)
	I0719 04:30:10.715751  149995 host.go:66] Checking if "ha-925161" exists ...
	I0719 04:30:10.716018  149995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:10.716060  149995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:10.731234  149995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38583
	I0719 04:30:10.731655  149995 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:10.732134  149995 main.go:141] libmachine: Using API Version  1
	I0719 04:30:10.732165  149995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:10.732608  149995 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:10.732779  149995 main.go:141] libmachine: (ha-925161) Calling .GetIP
	I0719 04:30:10.736043  149995 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:30:10.736555  149995 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:30:10.736587  149995 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:30:10.736730  149995 host.go:66] Checking if "ha-925161" exists ...
	I0719 04:30:10.737055  149995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:10.737121  149995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:10.753144  149995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46437
	I0719 04:30:10.753623  149995 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:10.754167  149995 main.go:141] libmachine: Using API Version  1
	I0719 04:30:10.754204  149995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:10.754615  149995 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:10.754842  149995 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:30:10.755079  149995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:30:10.755125  149995 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:30:10.758473  149995 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:30:10.758973  149995 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:30:10.759000  149995 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:30:10.759159  149995 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:30:10.759333  149995 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:30:10.759527  149995 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:30:10.759680  149995 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:30:10.856447  149995 ssh_runner.go:195] Run: systemctl --version
	I0719 04:30:10.863440  149995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:30:10.879582  149995 kubeconfig.go:125] found "ha-925161" server: "https://192.168.39.254:8443"
	I0719 04:30:10.879611  149995 api_server.go:166] Checking apiserver status ...
	I0719 04:30:10.879643  149995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 04:30:10.895312  149995 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1190/cgroup
	W0719 04:30:10.906272  149995 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1190/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 04:30:10.906333  149995 ssh_runner.go:195] Run: ls
	I0719 04:30:10.910512  149995 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 04:30:10.915007  149995 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 04:30:10.915031  149995 status.go:422] ha-925161 apiserver status = Running (err=<nil>)
	I0719 04:30:10.915041  149995 status.go:257] ha-925161 status: &{Name:ha-925161 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:30:10.915062  149995 status.go:255] checking status of ha-925161-m02 ...
	I0719 04:30:10.915405  149995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:10.915455  149995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:10.931718  149995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39123
	I0719 04:30:10.932143  149995 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:10.932607  149995 main.go:141] libmachine: Using API Version  1
	I0719 04:30:10.932628  149995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:10.932913  149995 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:10.933101  149995 main.go:141] libmachine: (ha-925161-m02) Calling .GetState
	I0719 04:30:10.934745  149995 status.go:330] ha-925161-m02 host status = "Running" (err=<nil>)
	I0719 04:30:10.934763  149995 host.go:66] Checking if "ha-925161-m02" exists ...
	I0719 04:30:10.935031  149995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:10.935065  149995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:10.951805  149995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36657
	I0719 04:30:10.952432  149995 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:10.952990  149995 main.go:141] libmachine: Using API Version  1
	I0719 04:30:10.953011  149995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:10.953479  149995 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:10.953708  149995 main.go:141] libmachine: (ha-925161-m02) Calling .GetIP
	I0719 04:30:10.956395  149995 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:30:10.956828  149995 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:30:10.956856  149995 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:30:10.957142  149995 host.go:66] Checking if "ha-925161-m02" exists ...
	I0719 04:30:10.957547  149995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:10.957590  149995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:10.973140  149995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41445
	I0719 04:30:10.973552  149995 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:10.974125  149995 main.go:141] libmachine: Using API Version  1
	I0719 04:30:10.974157  149995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:10.974536  149995 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:10.974731  149995 main.go:141] libmachine: (ha-925161-m02) Calling .DriverName
	I0719 04:30:10.974945  149995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:30:10.974973  149995 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHHostname
	I0719 04:30:10.978000  149995 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:30:10.978454  149995 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:30:10.978469  149995 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:30:10.978658  149995 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHPort
	I0719 04:30:10.978865  149995 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:30:10.979058  149995 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHUsername
	I0719 04:30:10.979247  149995 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02/id_rsa Username:docker}
	W0719 04:30:29.345323  149995 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.102:22: connect: no route to host
	W0719 04:30:29.345458  149995 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	E0719 04:30:29.345486  149995 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	I0719 04:30:29.345498  149995 status.go:257] ha-925161-m02 status: &{Name:ha-925161-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0719 04:30:29.345527  149995 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	I0719 04:30:29.345538  149995 status.go:255] checking status of ha-925161-m03 ...
	I0719 04:30:29.345905  149995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:29.345974  149995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:29.360938  149995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38591
	I0719 04:30:29.361455  149995 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:29.361982  149995 main.go:141] libmachine: Using API Version  1
	I0719 04:30:29.362009  149995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:29.362368  149995 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:29.362556  149995 main.go:141] libmachine: (ha-925161-m03) Calling .GetState
	I0719 04:30:29.364382  149995 status.go:330] ha-925161-m03 host status = "Running" (err=<nil>)
	I0719 04:30:29.364398  149995 host.go:66] Checking if "ha-925161-m03" exists ...
	I0719 04:30:29.364753  149995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:29.364809  149995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:29.379943  149995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42415
	I0719 04:30:29.380356  149995 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:29.380850  149995 main.go:141] libmachine: Using API Version  1
	I0719 04:30:29.380869  149995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:29.381234  149995 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:29.381427  149995 main.go:141] libmachine: (ha-925161-m03) Calling .GetIP
	I0719 04:30:29.384216  149995 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:30:29.384669  149995 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:30:29.384698  149995 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:30:29.384823  149995 host.go:66] Checking if "ha-925161-m03" exists ...
	I0719 04:30:29.385207  149995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:29.385251  149995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:29.399567  149995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42643
	I0719 04:30:29.399936  149995 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:29.400374  149995 main.go:141] libmachine: Using API Version  1
	I0719 04:30:29.400392  149995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:29.400700  149995 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:29.400941  149995 main.go:141] libmachine: (ha-925161-m03) Calling .DriverName
	I0719 04:30:29.401165  149995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:30:29.401194  149995 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHHostname
	I0719 04:30:29.403748  149995 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:30:29.404175  149995 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:30:29.404201  149995 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:30:29.404347  149995 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHPort
	I0719 04:30:29.404504  149995 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:30:29.404622  149995 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHUsername
	I0719 04:30:29.404749  149995 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03/id_rsa Username:docker}
	I0719 04:30:29.485689  149995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:30:29.503279  149995 kubeconfig.go:125] found "ha-925161" server: "https://192.168.39.254:8443"
	I0719 04:30:29.503320  149995 api_server.go:166] Checking apiserver status ...
	I0719 04:30:29.503365  149995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 04:30:29.518711  149995 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1532/cgroup
	W0719 04:30:29.527649  149995 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1532/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 04:30:29.527710  149995 ssh_runner.go:195] Run: ls
	I0719 04:30:29.531670  149995 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 04:30:29.537700  149995 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 04:30:29.537725  149995 status.go:422] ha-925161-m03 apiserver status = Running (err=<nil>)
	I0719 04:30:29.537734  149995 status.go:257] ha-925161-m03 status: &{Name:ha-925161-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:30:29.537749  149995 status.go:255] checking status of ha-925161-m04 ...
	I0719 04:30:29.538027  149995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:29.538051  149995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:29.552956  149995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38805
	I0719 04:30:29.553367  149995 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:29.553762  149995 main.go:141] libmachine: Using API Version  1
	I0719 04:30:29.553780  149995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:29.554160  149995 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:29.554322  149995 main.go:141] libmachine: (ha-925161-m04) Calling .GetState
	I0719 04:30:29.555909  149995 status.go:330] ha-925161-m04 host status = "Running" (err=<nil>)
	I0719 04:30:29.555937  149995 host.go:66] Checking if "ha-925161-m04" exists ...
	I0719 04:30:29.556223  149995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:29.556244  149995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:29.571140  149995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46301
	I0719 04:30:29.571610  149995 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:29.572065  149995 main.go:141] libmachine: Using API Version  1
	I0719 04:30:29.572084  149995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:29.572412  149995 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:29.572644  149995 main.go:141] libmachine: (ha-925161-m04) Calling .GetIP
	I0719 04:30:29.575366  149995 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:30:29.575814  149995 main.go:141] libmachine: (ha-925161-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:a2:b6", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:27:16 +0000 UTC Type:0 Mac:52:54:00:cb:a2:b6 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:ha-925161-m04 Clientid:01:52:54:00:cb:a2:b6}
	I0719 04:30:29.575848  149995 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined IP address 192.168.39.75 and MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:30:29.575999  149995 host.go:66] Checking if "ha-925161-m04" exists ...
	I0719 04:30:29.576310  149995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:29.576334  149995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:29.591871  149995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34285
	I0719 04:30:29.592275  149995 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:29.592754  149995 main.go:141] libmachine: Using API Version  1
	I0719 04:30:29.592777  149995 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:29.593148  149995 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:29.593370  149995 main.go:141] libmachine: (ha-925161-m04) Calling .DriverName
	I0719 04:30:29.593561  149995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:30:29.593584  149995 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHHostname
	I0719 04:30:29.596435  149995 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:30:29.596794  149995 main.go:141] libmachine: (ha-925161-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:a2:b6", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:27:16 +0000 UTC Type:0 Mac:52:54:00:cb:a2:b6 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:ha-925161-m04 Clientid:01:52:54:00:cb:a2:b6}
	I0719 04:30:29.596813  149995 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined IP address 192.168.39.75 and MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:30:29.596977  149995 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHPort
	I0719 04:30:29.597232  149995 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHKeyPath
	I0719 04:30:29.597379  149995 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHUsername
	I0719 04:30:29.597567  149995 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m04/id_rsa Username:docker}
	I0719 04:30:29.685535  149995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:30:29.701873  149995 status.go:257] ha-925161-m04 status: &{Name:ha-925161-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-925161 status -v=7 --alsologtostderr" : exit status 3
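The status stderr explains the Error/Nonexistent row for m02: the SSH dial to 192.168.39.102:22 fails with "no route to host" (the guest is still mid-shutdown after the failed stop), so minikube reports host: Error and never reaches the checks it runs on the healthy nodes (sudo systemctl is-active --quiet kubelet, then a GET against https://192.168.39.254:8443/healthz). A rough sketch of that per-node decision, under those assumptions, follows; nodeStatus and probeNode are hypothetical names, not minikube's own types.

	// Rough sketch of the per-node status probe implied by the stderr above;
	// the types and helper names are illustrative, not minikube's API.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net"
		"net/http"
		"time"
	)

	type nodeStatus struct {
		Name, Host, Kubelet, APIServer string
	}

	func probeNode(name, ip, healthzURL string) nodeStatus {
		// Step 1: reachability. A "no route to host" here is what turns
		// ha-925161-m02 into host: Error / kubelet: Nonexistent above.
		conn, err := net.DialTimeout("tcp", net.JoinHostPort(ip, "22"), 10*time.Second)
		if err != nil {
			return nodeStatus{name, "Error", "Nonexistent", "Nonexistent"}
		}
		conn.Close()

		// Step 2: on reachable nodes the real check runs
		// `sudo systemctl is-active --quiet kubelet` over SSH and then probes
		// the load-balancer VIP; a plain HTTPS GET on /healthz stands in here.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(healthzURL)
		if err != nil {
			return nodeStatus{name, "Running", "Running", "Stopped"}
		}
		defer resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return nodeStatus{name, "Running", "Running", "Running"}
		}
		return nodeStatus{name, "Running", "Running", "Error"}
	}

	func main() {
		fmt.Printf("%+v\n", probeNode("ha-925161-m02", "192.168.39.102",
			"https://192.168.39.254:8443/healthz"))
	}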
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-925161 -n ha-925161
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-925161 logs -n 25: (1.328786153s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-925161 cp ha-925161-m03:/home/docker/cp-test.txt                              | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3159028946/001/cp-test_ha-925161-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n                                                                 | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-925161 cp ha-925161-m03:/home/docker/cp-test.txt                              | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161:/home/docker/cp-test_ha-925161-m03_ha-925161.txt                       |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n                                                                 | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n ha-925161 sudo cat                                              | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | /home/docker/cp-test_ha-925161-m03_ha-925161.txt                                 |           |         |         |                     |                     |
	| cp      | ha-925161 cp ha-925161-m03:/home/docker/cp-test.txt                              | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m02:/home/docker/cp-test_ha-925161-m03_ha-925161-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n                                                                 | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n ha-925161-m02 sudo cat                                          | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | /home/docker/cp-test_ha-925161-m03_ha-925161-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-925161 cp ha-925161-m03:/home/docker/cp-test.txt                              | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m04:/home/docker/cp-test_ha-925161-m03_ha-925161-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n                                                                 | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n ha-925161-m04 sudo cat                                          | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | /home/docker/cp-test_ha-925161-m03_ha-925161-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-925161 cp testdata/cp-test.txt                                                | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n                                                                 | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-925161 cp ha-925161-m04:/home/docker/cp-test.txt                              | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3159028946/001/cp-test_ha-925161-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n                                                                 | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-925161 cp ha-925161-m04:/home/docker/cp-test.txt                              | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161:/home/docker/cp-test_ha-925161-m04_ha-925161.txt                       |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n                                                                 | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n ha-925161 sudo cat                                              | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | /home/docker/cp-test_ha-925161-m04_ha-925161.txt                                 |           |         |         |                     |                     |
	| cp      | ha-925161 cp ha-925161-m04:/home/docker/cp-test.txt                              | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m02:/home/docker/cp-test_ha-925161-m04_ha-925161-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n                                                                 | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n ha-925161-m02 sudo cat                                          | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | /home/docker/cp-test_ha-925161-m04_ha-925161-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-925161 cp ha-925161-m04:/home/docker/cp-test.txt                              | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m03:/home/docker/cp-test_ha-925161-m04_ha-925161-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n                                                                 | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n ha-925161-m03 sudo cat                                          | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | /home/docker/cp-test_ha-925161-m04_ha-925161-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-925161 node stop m02 -v=7                                                     | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 04:22:29
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 04:22:29.779814  145142 out.go:291] Setting OutFile to fd 1 ...
	I0719 04:22:29.780075  145142 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:22:29.780085  145142 out.go:304] Setting ErrFile to fd 2...
	I0719 04:22:29.780090  145142 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:22:29.780324  145142 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-122995/.minikube/bin
	I0719 04:22:29.780936  145142 out.go:298] Setting JSON to false
	I0719 04:22:29.781879  145142 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7493,"bootTime":1721355457,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 04:22:29.781936  145142 start.go:139] virtualization: kvm guest
	I0719 04:22:29.784151  145142 out.go:177] * [ha-925161] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 04:22:29.785471  145142 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 04:22:29.785479  145142 notify.go:220] Checking for updates...
	I0719 04:22:29.787820  145142 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 04:22:29.788891  145142 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-122995/kubeconfig
	I0719 04:22:29.789962  145142 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-122995/.minikube
	I0719 04:22:29.791120  145142 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 04:22:29.792216  145142 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 04:22:29.793437  145142 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 04:22:29.827725  145142 out.go:177] * Using the kvm2 driver based on user configuration
	I0719 04:22:29.828880  145142 start.go:297] selected driver: kvm2
	I0719 04:22:29.828895  145142 start.go:901] validating driver "kvm2" against <nil>
	I0719 04:22:29.828906  145142 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 04:22:29.829651  145142 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 04:22:29.829720  145142 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-122995/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 04:22:29.844753  145142 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 04:22:29.844844  145142 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 04:22:29.845270  145142 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 04:22:29.845527  145142 cni.go:84] Creating CNI manager for ""
	I0719 04:22:29.845544  145142 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0719 04:22:29.845554  145142 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0719 04:22:29.845637  145142 start.go:340] cluster config:
	{Name:ha-925161 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-925161 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:22:29.845736  145142 iso.go:125] acquiring lock: {Name:mk610026cb7ac7ecfa6440021a031d3b49160f81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 04:22:29.847611  145142 out.go:177] * Starting "ha-925161" primary control-plane node in "ha-925161" cluster
	I0719 04:22:29.848780  145142 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 04:22:29.848818  145142 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 04:22:29.848832  145142 cache.go:56] Caching tarball of preloaded images
	I0719 04:22:29.848919  145142 preload.go:172] Found /home/jenkins/minikube-integration/19302-122995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 04:22:29.848933  145142 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 04:22:29.849365  145142 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/config.json ...
	I0719 04:22:29.849395  145142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/config.json: {Name:mk42287f9f8916c94b7b3c67930dafa0c3559cb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:22:29.849568  145142 start.go:360] acquireMachinesLock for ha-925161: {Name:mkfbbe6ca8c44534b944b48224a0199ec825bc72 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 04:22:29.849609  145142 start.go:364] duration metric: took 21.401µs to acquireMachinesLock for "ha-925161"
	I0719 04:22:29.849633  145142 start.go:93] Provisioning new machine with config: &{Name:ha-925161 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-925161 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 04:22:29.849725  145142 start.go:125] createHost starting for "" (driver="kvm2")
	I0719 04:22:29.851249  145142 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 04:22:29.851419  145142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:22:29.851451  145142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:22:29.865955  145142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45139
	I0719 04:22:29.866418  145142 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:22:29.867045  145142 main.go:141] libmachine: Using API Version  1
	I0719 04:22:29.867066  145142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:22:29.867383  145142 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:22:29.867589  145142 main.go:141] libmachine: (ha-925161) Calling .GetMachineName
	I0719 04:22:29.867778  145142 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:22:29.867946  145142 start.go:159] libmachine.API.Create for "ha-925161" (driver="kvm2")
	I0719 04:22:29.867975  145142 client.go:168] LocalClient.Create starting
	I0719 04:22:29.868010  145142 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem
	I0719 04:22:29.868110  145142 main.go:141] libmachine: Decoding PEM data...
	I0719 04:22:29.868132  145142 main.go:141] libmachine: Parsing certificate...
	I0719 04:22:29.868194  145142 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem
	I0719 04:22:29.868220  145142 main.go:141] libmachine: Decoding PEM data...
	I0719 04:22:29.868234  145142 main.go:141] libmachine: Parsing certificate...
	I0719 04:22:29.868250  145142 main.go:141] libmachine: Running pre-create checks...
	I0719 04:22:29.868258  145142 main.go:141] libmachine: (ha-925161) Calling .PreCreateCheck
	I0719 04:22:29.868687  145142 main.go:141] libmachine: (ha-925161) Calling .GetConfigRaw
	I0719 04:22:29.869098  145142 main.go:141] libmachine: Creating machine...
	I0719 04:22:29.869118  145142 main.go:141] libmachine: (ha-925161) Calling .Create
	I0719 04:22:29.869252  145142 main.go:141] libmachine: (ha-925161) Creating KVM machine...
	I0719 04:22:29.870412  145142 main.go:141] libmachine: (ha-925161) DBG | found existing default KVM network
	I0719 04:22:29.871104  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:29.870959  145164 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015c10}
	I0719 04:22:29.871125  145142 main.go:141] libmachine: (ha-925161) DBG | created network xml: 
	I0719 04:22:29.871137  145142 main.go:141] libmachine: (ha-925161) DBG | <network>
	I0719 04:22:29.871146  145142 main.go:141] libmachine: (ha-925161) DBG |   <name>mk-ha-925161</name>
	I0719 04:22:29.871155  145142 main.go:141] libmachine: (ha-925161) DBG |   <dns enable='no'/>
	I0719 04:22:29.871165  145142 main.go:141] libmachine: (ha-925161) DBG |   
	I0719 04:22:29.871177  145142 main.go:141] libmachine: (ha-925161) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0719 04:22:29.871188  145142 main.go:141] libmachine: (ha-925161) DBG |     <dhcp>
	I0719 04:22:29.871278  145142 main.go:141] libmachine: (ha-925161) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0719 04:22:29.871317  145142 main.go:141] libmachine: (ha-925161) DBG |     </dhcp>
	I0719 04:22:29.871343  145142 main.go:141] libmachine: (ha-925161) DBG |   </ip>
	I0719 04:22:29.871363  145142 main.go:141] libmachine: (ha-925161) DBG |   
	I0719 04:22:29.871375  145142 main.go:141] libmachine: (ha-925161) DBG | </network>
	I0719 04:22:29.871383  145142 main.go:141] libmachine: (ha-925161) DBG | 
	I0719 04:22:29.875939  145142 main.go:141] libmachine: (ha-925161) DBG | trying to create private KVM network mk-ha-925161 192.168.39.0/24...
	I0719 04:22:29.944824  145142 main.go:141] libmachine: (ha-925161) DBG | private KVM network mk-ha-925161 192.168.39.0/24 created
	I0719 04:22:29.944873  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:29.944745  145164 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19302-122995/.minikube
	I0719 04:22:29.944887  145142 main.go:141] libmachine: (ha-925161) Setting up store path in /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161 ...
	I0719 04:22:29.944907  145142 main.go:141] libmachine: (ha-925161) Building disk image from file:///home/jenkins/minikube-integration/19302-122995/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 04:22:29.944925  145142 main.go:141] libmachine: (ha-925161) Downloading /home/jenkins/minikube-integration/19302-122995/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19302-122995/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 04:22:30.192232  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:30.192113  145164 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa...
	I0719 04:22:30.420050  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:30.419853  145164 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/ha-925161.rawdisk...
	I0719 04:22:30.420096  145142 main.go:141] libmachine: (ha-925161) DBG | Writing magic tar header
	I0719 04:22:30.420115  145142 main.go:141] libmachine: (ha-925161) DBG | Writing SSH key tar header
	I0719 04:22:30.420129  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:30.420040  145164 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161 ...
	I0719 04:22:30.420151  145142 main.go:141] libmachine: (ha-925161) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161
	I0719 04:22:30.420301  145142 main.go:141] libmachine: (ha-925161) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161 (perms=drwx------)
	I0719 04:22:30.420339  145142 main.go:141] libmachine: (ha-925161) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995/.minikube/machines
	I0719 04:22:30.420356  145142 main.go:141] libmachine: (ha-925161) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995/.minikube/machines (perms=drwxr-xr-x)
	I0719 04:22:30.420404  145142 main.go:141] libmachine: (ha-925161) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995/.minikube
	I0719 04:22:30.420431  145142 main.go:141] libmachine: (ha-925161) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995/.minikube (perms=drwxr-xr-x)
	I0719 04:22:30.420444  145142 main.go:141] libmachine: (ha-925161) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995
	I0719 04:22:30.420459  145142 main.go:141] libmachine: (ha-925161) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0719 04:22:30.420469  145142 main.go:141] libmachine: (ha-925161) DBG | Checking permissions on dir: /home/jenkins
	I0719 04:22:30.420478  145142 main.go:141] libmachine: (ha-925161) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995 (perms=drwxrwxr-x)
	I0719 04:22:30.420491  145142 main.go:141] libmachine: (ha-925161) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0719 04:22:30.420499  145142 main.go:141] libmachine: (ha-925161) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0719 04:22:30.420513  145142 main.go:141] libmachine: (ha-925161) Creating domain...
	I0719 04:22:30.420535  145142 main.go:141] libmachine: (ha-925161) DBG | Checking permissions on dir: /home
	I0719 04:22:30.420546  145142 main.go:141] libmachine: (ha-925161) DBG | Skipping /home - not owner
	I0719 04:22:30.421634  145142 main.go:141] libmachine: (ha-925161) define libvirt domain using xml: 
	I0719 04:22:30.421652  145142 main.go:141] libmachine: (ha-925161) <domain type='kvm'>
	I0719 04:22:30.421661  145142 main.go:141] libmachine: (ha-925161)   <name>ha-925161</name>
	I0719 04:22:30.421669  145142 main.go:141] libmachine: (ha-925161)   <memory unit='MiB'>2200</memory>
	I0719 04:22:30.421681  145142 main.go:141] libmachine: (ha-925161)   <vcpu>2</vcpu>
	I0719 04:22:30.421685  145142 main.go:141] libmachine: (ha-925161)   <features>
	I0719 04:22:30.421690  145142 main.go:141] libmachine: (ha-925161)     <acpi/>
	I0719 04:22:30.421694  145142 main.go:141] libmachine: (ha-925161)     <apic/>
	I0719 04:22:30.421699  145142 main.go:141] libmachine: (ha-925161)     <pae/>
	I0719 04:22:30.421708  145142 main.go:141] libmachine: (ha-925161)     
	I0719 04:22:30.421712  145142 main.go:141] libmachine: (ha-925161)   </features>
	I0719 04:22:30.421718  145142 main.go:141] libmachine: (ha-925161)   <cpu mode='host-passthrough'>
	I0719 04:22:30.421726  145142 main.go:141] libmachine: (ha-925161)   
	I0719 04:22:30.421732  145142 main.go:141] libmachine: (ha-925161)   </cpu>
	I0719 04:22:30.421740  145142 main.go:141] libmachine: (ha-925161)   <os>
	I0719 04:22:30.421754  145142 main.go:141] libmachine: (ha-925161)     <type>hvm</type>
	I0719 04:22:30.421770  145142 main.go:141] libmachine: (ha-925161)     <boot dev='cdrom'/>
	I0719 04:22:30.421783  145142 main.go:141] libmachine: (ha-925161)     <boot dev='hd'/>
	I0719 04:22:30.421791  145142 main.go:141] libmachine: (ha-925161)     <bootmenu enable='no'/>
	I0719 04:22:30.421796  145142 main.go:141] libmachine: (ha-925161)   </os>
	I0719 04:22:30.421802  145142 main.go:141] libmachine: (ha-925161)   <devices>
	I0719 04:22:30.421807  145142 main.go:141] libmachine: (ha-925161)     <disk type='file' device='cdrom'>
	I0719 04:22:30.421819  145142 main.go:141] libmachine: (ha-925161)       <source file='/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/boot2docker.iso'/>
	I0719 04:22:30.421831  145142 main.go:141] libmachine: (ha-925161)       <target dev='hdc' bus='scsi'/>
	I0719 04:22:30.421846  145142 main.go:141] libmachine: (ha-925161)       <readonly/>
	I0719 04:22:30.421861  145142 main.go:141] libmachine: (ha-925161)     </disk>
	I0719 04:22:30.421870  145142 main.go:141] libmachine: (ha-925161)     <disk type='file' device='disk'>
	I0719 04:22:30.421878  145142 main.go:141] libmachine: (ha-925161)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0719 04:22:30.421889  145142 main.go:141] libmachine: (ha-925161)       <source file='/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/ha-925161.rawdisk'/>
	I0719 04:22:30.421896  145142 main.go:141] libmachine: (ha-925161)       <target dev='hda' bus='virtio'/>
	I0719 04:22:30.421901  145142 main.go:141] libmachine: (ha-925161)     </disk>
	I0719 04:22:30.421908  145142 main.go:141] libmachine: (ha-925161)     <interface type='network'>
	I0719 04:22:30.421914  145142 main.go:141] libmachine: (ha-925161)       <source network='mk-ha-925161'/>
	I0719 04:22:30.421924  145142 main.go:141] libmachine: (ha-925161)       <model type='virtio'/>
	I0719 04:22:30.421951  145142 main.go:141] libmachine: (ha-925161)     </interface>
	I0719 04:22:30.421974  145142 main.go:141] libmachine: (ha-925161)     <interface type='network'>
	I0719 04:22:30.421987  145142 main.go:141] libmachine: (ha-925161)       <source network='default'/>
	I0719 04:22:30.421997  145142 main.go:141] libmachine: (ha-925161)       <model type='virtio'/>
	I0719 04:22:30.422009  145142 main.go:141] libmachine: (ha-925161)     </interface>
	I0719 04:22:30.422018  145142 main.go:141] libmachine: (ha-925161)     <serial type='pty'>
	I0719 04:22:30.422027  145142 main.go:141] libmachine: (ha-925161)       <target port='0'/>
	I0719 04:22:30.422034  145142 main.go:141] libmachine: (ha-925161)     </serial>
	I0719 04:22:30.422049  145142 main.go:141] libmachine: (ha-925161)     <console type='pty'>
	I0719 04:22:30.422066  145142 main.go:141] libmachine: (ha-925161)       <target type='serial' port='0'/>
	I0719 04:22:30.422078  145142 main.go:141] libmachine: (ha-925161)     </console>
	I0719 04:22:30.422089  145142 main.go:141] libmachine: (ha-925161)     <rng model='virtio'>
	I0719 04:22:30.422101  145142 main.go:141] libmachine: (ha-925161)       <backend model='random'>/dev/random</backend>
	I0719 04:22:30.422110  145142 main.go:141] libmachine: (ha-925161)     </rng>
	I0719 04:22:30.422119  145142 main.go:141] libmachine: (ha-925161)     
	I0719 04:22:30.422128  145142 main.go:141] libmachine: (ha-925161)     
	I0719 04:22:30.422136  145142 main.go:141] libmachine: (ha-925161)   </devices>
	I0719 04:22:30.422149  145142 main.go:141] libmachine: (ha-925161) </domain>
	I0719 04:22:30.422162  145142 main.go:141] libmachine: (ha-925161) 
	I0719 04:22:30.426564  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:70:7c:b0 in network default
	I0719 04:22:30.427164  145142 main.go:141] libmachine: (ha-925161) Ensuring networks are active...
	I0719 04:22:30.427178  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:30.427848  145142 main.go:141] libmachine: (ha-925161) Ensuring network default is active
	I0719 04:22:30.428157  145142 main.go:141] libmachine: (ha-925161) Ensuring network mk-ha-925161 is active
	I0719 04:22:30.428726  145142 main.go:141] libmachine: (ha-925161) Getting domain xml...
	I0719 04:22:30.429504  145142 main.go:141] libmachine: (ha-925161) Creating domain...
	I0719 04:22:31.588719  145142 main.go:141] libmachine: (ha-925161) Waiting to get IP...
	I0719 04:22:31.589394  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:31.589737  145142 main.go:141] libmachine: (ha-925161) DBG | unable to find current IP address of domain ha-925161 in network mk-ha-925161
	I0719 04:22:31.589777  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:31.589730  145164 retry.go:31] will retry after 249.411961ms: waiting for machine to come up
	I0719 04:22:31.841250  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:31.841746  145142 main.go:141] libmachine: (ha-925161) DBG | unable to find current IP address of domain ha-925161 in network mk-ha-925161
	I0719 04:22:31.841771  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:31.841699  145164 retry.go:31] will retry after 263.722178ms: waiting for machine to come up
	I0719 04:22:32.107140  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:32.107503  145142 main.go:141] libmachine: (ha-925161) DBG | unable to find current IP address of domain ha-925161 in network mk-ha-925161
	I0719 04:22:32.107526  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:32.107459  145164 retry.go:31] will retry after 367.963801ms: waiting for machine to come up
	I0719 04:22:32.476968  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:32.477453  145142 main.go:141] libmachine: (ha-925161) DBG | unable to find current IP address of domain ha-925161 in network mk-ha-925161
	I0719 04:22:32.477475  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:32.477396  145164 retry.go:31] will retry after 461.391177ms: waiting for machine to come up
	I0719 04:22:32.939800  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:32.940202  145142 main.go:141] libmachine: (ha-925161) DBG | unable to find current IP address of domain ha-925161 in network mk-ha-925161
	I0719 04:22:32.940225  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:32.940166  145164 retry.go:31] will retry after 690.740962ms: waiting for machine to come up
	I0719 04:22:33.632541  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:33.632968  145142 main.go:141] libmachine: (ha-925161) DBG | unable to find current IP address of domain ha-925161 in network mk-ha-925161
	I0719 04:22:33.632990  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:33.632939  145164 retry.go:31] will retry after 870.685105ms: waiting for machine to come up
	I0719 04:22:34.505012  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:34.505426  145142 main.go:141] libmachine: (ha-925161) DBG | unable to find current IP address of domain ha-925161 in network mk-ha-925161
	I0719 04:22:34.505457  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:34.505371  145164 retry.go:31] will retry after 787.01465ms: waiting for machine to come up
	I0719 04:22:35.293999  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:35.294365  145142 main.go:141] libmachine: (ha-925161) DBG | unable to find current IP address of domain ha-925161 in network mk-ha-925161
	I0719 04:22:35.294398  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:35.294309  145164 retry.go:31] will retry after 1.058390976s: waiting for machine to come up
	I0719 04:22:36.354463  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:36.354995  145142 main.go:141] libmachine: (ha-925161) DBG | unable to find current IP address of domain ha-925161 in network mk-ha-925161
	I0719 04:22:36.355025  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:36.354941  145164 retry.go:31] will retry after 1.505541373s: waiting for machine to come up
	I0719 04:22:37.862043  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:37.862525  145142 main.go:141] libmachine: (ha-925161) DBG | unable to find current IP address of domain ha-925161 in network mk-ha-925161
	I0719 04:22:37.862547  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:37.862473  145164 retry.go:31] will retry after 1.957410467s: waiting for machine to come up
	I0719 04:22:39.822568  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:39.823050  145142 main.go:141] libmachine: (ha-925161) DBG | unable to find current IP address of domain ha-925161 in network mk-ha-925161
	I0719 04:22:39.823089  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:39.823001  145164 retry.go:31] will retry after 2.175599008s: waiting for machine to come up
	I0719 04:22:41.999787  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:42.000202  145142 main.go:141] libmachine: (ha-925161) DBG | unable to find current IP address of domain ha-925161 in network mk-ha-925161
	I0719 04:22:42.000233  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:42.000150  145164 retry.go:31] will retry after 2.207076605s: waiting for machine to come up
	I0719 04:22:44.210455  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:44.210888  145142 main.go:141] libmachine: (ha-925161) DBG | unable to find current IP address of domain ha-925161 in network mk-ha-925161
	I0719 04:22:44.210912  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:44.210840  145164 retry.go:31] will retry after 2.974664162s: waiting for machine to come up
	I0719 04:22:47.188508  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:47.189032  145142 main.go:141] libmachine: (ha-925161) DBG | unable to find current IP address of domain ha-925161 in network mk-ha-925161
	I0719 04:22:47.189054  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:47.188978  145164 retry.go:31] will retry after 3.753610745s: waiting for machine to come up
	I0719 04:22:50.944522  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:50.944989  145142 main.go:141] libmachine: (ha-925161) Found IP for machine: 192.168.39.246
	I0719 04:22:50.945009  145142 main.go:141] libmachine: (ha-925161) Reserving static IP address...
	I0719 04:22:50.945022  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has current primary IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:50.945472  145142 main.go:141] libmachine: (ha-925161) DBG | unable to find host DHCP lease matching {name: "ha-925161", mac: "52:54:00:15:c3:8c", ip: "192.168.39.246"} in network mk-ha-925161
	I0719 04:22:51.018725  145142 main.go:141] libmachine: (ha-925161) DBG | Getting to WaitForSSH function...
	I0719 04:22:51.018760  145142 main.go:141] libmachine: (ha-925161) Reserved static IP address: 192.168.39.246
	I0719 04:22:51.018774  145142 main.go:141] libmachine: (ha-925161) Waiting for SSH to be available...
	I0719 04:22:51.021353  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.021792  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:minikube Clientid:01:52:54:00:15:c3:8c}
	I0719 04:22:51.021821  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.021953  145142 main.go:141] libmachine: (ha-925161) DBG | Using SSH client type: external
	I0719 04:22:51.021980  145142 main.go:141] libmachine: (ha-925161) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa (-rw-------)
	I0719 04:22:51.022010  145142 main.go:141] libmachine: (ha-925161) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 04:22:51.022020  145142 main.go:141] libmachine: (ha-925161) DBG | About to run SSH command:
	I0719 04:22:51.022055  145142 main.go:141] libmachine: (ha-925161) DBG | exit 0
	I0719 04:22:51.145116  145142 main.go:141] libmachine: (ha-925161) DBG | SSH cmd err, output: <nil>: 
	I0719 04:22:51.145388  145142 main.go:141] libmachine: (ha-925161) KVM machine creation complete!
	I0719 04:22:51.145695  145142 main.go:141] libmachine: (ha-925161) Calling .GetConfigRaw
	I0719 04:22:51.146268  145142 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:22:51.146475  145142 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:22:51.146643  145142 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0719 04:22:51.146660  145142 main.go:141] libmachine: (ha-925161) Calling .GetState
	I0719 04:22:51.147937  145142 main.go:141] libmachine: Detecting operating system of created instance...
	I0719 04:22:51.147953  145142 main.go:141] libmachine: Waiting for SSH to be available...
	I0719 04:22:51.147958  145142 main.go:141] libmachine: Getting to WaitForSSH function...
	I0719 04:22:51.147964  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:22:51.150250  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.150613  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:22:51.150639  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.150801  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:22:51.151003  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:22:51.151219  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:22:51.151391  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:22:51.151591  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:22:51.151841  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0719 04:22:51.151854  145142 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0719 04:22:51.260174  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 04:22:51.260202  145142 main.go:141] libmachine: Detecting the provisioner...
	I0719 04:22:51.260213  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:22:51.262758  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.263152  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:22:51.263183  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.263360  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:22:51.263593  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:22:51.263774  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:22:51.263956  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:22:51.264129  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:22:51.264302  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0719 04:22:51.264312  145142 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0719 04:22:51.369301  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0719 04:22:51.369395  145142 main.go:141] libmachine: found compatible host: buildroot
	I0719 04:22:51.369402  145142 main.go:141] libmachine: Provisioning with buildroot...
	I0719 04:22:51.369411  145142 main.go:141] libmachine: (ha-925161) Calling .GetMachineName
	I0719 04:22:51.369650  145142 buildroot.go:166] provisioning hostname "ha-925161"
	I0719 04:22:51.369677  145142 main.go:141] libmachine: (ha-925161) Calling .GetMachineName
	I0719 04:22:51.369925  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:22:51.372464  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.372803  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:22:51.372829  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.373018  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:22:51.373199  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:22:51.373367  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:22:51.373513  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:22:51.373696  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:22:51.373904  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0719 04:22:51.373920  145142 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-925161 && echo "ha-925161" | sudo tee /etc/hostname
	I0719 04:22:51.494103  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-925161
	
	I0719 04:22:51.494128  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:22:51.496673  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.497038  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:22:51.497078  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.497294  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:22:51.497484  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:22:51.497638  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:22:51.497755  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:22:51.497886  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:22:51.498050  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0719 04:22:51.498066  145142 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-925161' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-925161/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-925161' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 04:22:51.613340  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 04:22:51.613369  145142 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-122995/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-122995/.minikube}
	I0719 04:22:51.613396  145142 buildroot.go:174] setting up certificates
	I0719 04:22:51.613410  145142 provision.go:84] configureAuth start
	I0719 04:22:51.613425  145142 main.go:141] libmachine: (ha-925161) Calling .GetMachineName
	I0719 04:22:51.613741  145142 main.go:141] libmachine: (ha-925161) Calling .GetIP
	I0719 04:22:51.616089  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.616425  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:22:51.616450  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.616644  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:22:51.618512  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.618790  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:22:51.618815  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.618929  145142 provision.go:143] copyHostCerts
	I0719 04:22:51.618970  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem
	I0719 04:22:51.619009  145142 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem, removing ...
	I0719 04:22:51.619021  145142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem
	I0719 04:22:51.619112  145142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem (1123 bytes)
	I0719 04:22:51.619208  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem
	I0719 04:22:51.619233  145142 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem, removing ...
	I0719 04:22:51.619243  145142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem
	I0719 04:22:51.619283  145142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem (1679 bytes)
	I0719 04:22:51.619389  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem
	I0719 04:22:51.619416  145142 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem, removing ...
	I0719 04:22:51.619426  145142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem
	I0719 04:22:51.619464  145142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem (1082 bytes)
	I0719 04:22:51.619532  145142 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem org=jenkins.ha-925161 san=[127.0.0.1 192.168.39.246 ha-925161 localhost minikube]
	I0719 04:22:51.663768  145142 provision.go:177] copyRemoteCerts
	I0719 04:22:51.663824  145142 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 04:22:51.663850  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:22:51.666543  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.666863  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:22:51.666893  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.667035  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:22:51.667218  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:22:51.667391  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:22:51.667555  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:22:51.752405  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 04:22:51.752484  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 04:22:51.774151  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 04:22:51.774216  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 04:22:51.795937  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 04:22:51.796002  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0719 04:22:51.817323  145142 provision.go:87] duration metric: took 203.899941ms to configureAuth
	I0719 04:22:51.817351  145142 buildroot.go:189] setting minikube options for container-runtime
	I0719 04:22:51.817524  145142 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:22:51.817604  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:22:51.820662  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.821038  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:22:51.821085  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.821218  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:22:51.821432  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:22:51.821578  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:22:51.821743  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:22:51.821904  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:22:51.822074  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0719 04:22:51.822092  145142 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 04:22:52.077205  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 04:22:52.077235  145142 main.go:141] libmachine: Checking connection to Docker...
	I0719 04:22:52.077245  145142 main.go:141] libmachine: (ha-925161) Calling .GetURL
	I0719 04:22:52.078520  145142 main.go:141] libmachine: (ha-925161) DBG | Using libvirt version 6000000
	I0719 04:22:52.080782  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:52.081163  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:22:52.081193  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:52.081359  145142 main.go:141] libmachine: Docker is up and running!
	I0719 04:22:52.081372  145142 main.go:141] libmachine: Reticulating splines...
	I0719 04:22:52.081380  145142 client.go:171] duration metric: took 22.213394389s to LocalClient.Create
	I0719 04:22:52.081404  145142 start.go:167] duration metric: took 22.213460023s to libmachine.API.Create "ha-925161"
	I0719 04:22:52.081414  145142 start.go:293] postStartSetup for "ha-925161" (driver="kvm2")
	I0719 04:22:52.081422  145142 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 04:22:52.081439  145142 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:22:52.081699  145142 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 04:22:52.081730  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:22:52.083655  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:52.083904  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:22:52.083924  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:52.084069  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:22:52.084243  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:22:52.084386  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:22:52.084516  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:22:52.167142  145142 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 04:22:52.170928  145142 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 04:22:52.170953  145142 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-122995/.minikube/addons for local assets ...
	I0719 04:22:52.171027  145142 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-122995/.minikube/files for local assets ...
	I0719 04:22:52.171144  145142 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem -> 1301702.pem in /etc/ssl/certs
	I0719 04:22:52.171159  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem -> /etc/ssl/certs/1301702.pem
	I0719 04:22:52.171273  145142 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 04:22:52.179990  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem --> /etc/ssl/certs/1301702.pem (1708 bytes)
	I0719 04:22:52.201307  145142 start.go:296] duration metric: took 119.879736ms for postStartSetup
	I0719 04:22:52.201359  145142 main.go:141] libmachine: (ha-925161) Calling .GetConfigRaw
	I0719 04:22:52.201989  145142 main.go:141] libmachine: (ha-925161) Calling .GetIP
	I0719 04:22:52.204369  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:52.204678  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:22:52.204699  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:52.204974  145142 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/config.json ...
	I0719 04:22:52.205234  145142 start.go:128] duration metric: took 22.355495768s to createHost
	I0719 04:22:52.205264  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:22:52.207464  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:52.207757  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:22:52.207779  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:52.207942  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:22:52.208138  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:22:52.208320  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:22:52.208447  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:22:52.208586  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:22:52.208764  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0719 04:22:52.208782  145142 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 04:22:52.317415  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721362972.292561103
	
	I0719 04:22:52.317439  145142 fix.go:216] guest clock: 1721362972.292561103
	I0719 04:22:52.317449  145142 fix.go:229] Guest: 2024-07-19 04:22:52.292561103 +0000 UTC Remote: 2024-07-19 04:22:52.205248354 +0000 UTC m=+22.458372431 (delta=87.312749ms)
	I0719 04:22:52.317509  145142 fix.go:200] guest clock delta is within tolerance: 87.312749ms
	I0719 04:22:52.317520  145142 start.go:83] releasing machines lock for "ha-925161", held for 22.46789615s
	I0719 04:22:52.317550  145142 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:22:52.317844  145142 main.go:141] libmachine: (ha-925161) Calling .GetIP
	I0719 04:22:52.320096  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:52.320481  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:22:52.320494  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:52.320651  145142 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:22:52.321136  145142 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:22:52.321303  145142 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:22:52.321397  145142 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 04:22:52.321441  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:22:52.321569  145142 ssh_runner.go:195] Run: cat /version.json
	I0719 04:22:52.321593  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:22:52.323949  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:52.324156  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:52.324388  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:22:52.324415  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:52.324508  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:22:52.324679  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:22:52.324665  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:22:52.324744  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:52.324790  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:22:52.324883  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:22:52.324946  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:22:52.325030  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:22:52.325113  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:22:52.325235  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:22:52.401397  145142 ssh_runner.go:195] Run: systemctl --version
	I0719 04:22:52.437235  145142 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 04:22:52.594635  145142 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 04:22:52.600375  145142 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 04:22:52.600438  145142 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 04:22:52.614783  145142 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 04:22:52.614809  145142 start.go:495] detecting cgroup driver to use...
	I0719 04:22:52.614879  145142 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 04:22:52.630236  145142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 04:22:52.642797  145142 docker.go:217] disabling cri-docker service (if available) ...
	I0719 04:22:52.642858  145142 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 04:22:52.654858  145142 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 04:22:52.666830  145142 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 04:22:52.781082  145142 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 04:22:52.941832  145142 docker.go:233] disabling docker service ...
	I0719 04:22:52.941908  145142 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 04:22:52.955307  145142 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 04:22:52.967554  145142 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 04:22:53.089302  145142 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 04:22:53.210780  145142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 04:22:53.223427  145142 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 04:22:53.240098  145142 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 04:22:53.240168  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:22:53.249710  145142 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 04:22:53.249794  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:22:53.259149  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:22:53.268593  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:22:53.277902  145142 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 04:22:53.287610  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:22:53.296814  145142 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:22:53.312257  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
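The sed invocations between 04:22:53.240 and 04:22:53.312 all edit the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf: the pause image, the cgroup manager, the conmon cgroup, and the unprivileged-port sysctl. For reference, a rough local equivalent in Go that shells out to sed the same way the remote commands do (a sketch only, not minikube's ssh_runner code; values are taken verbatim from the log above):

    package main

    // Sketch: apply the same CRI-O drop-in edits the log shows, by running sed locally.
    // Assumes root privileges and that the drop-in file already exists.
    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
    	edits := []string{
    		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' ` + conf,
    		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ` + conf,
    		`sudo sed -i '/conmon_cgroup = .*/d' ` + conf,
    		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf,
    	}
    	for _, e := range edits {
    		if out, err := exec.Command("sh", "-c", e).CombinedOutput(); err != nil {
    			log.Fatalf("%s: %v\n%s", e, err, out)
    		}
    	}
    }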
	I0719 04:22:53.321893  145142 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 04:22:53.330295  145142 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 04:22:53.330338  145142 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 04:22:53.341563  145142 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 04:22:53.350032  145142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:22:53.467060  145142 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 04:22:53.594661  145142 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 04:22:53.594734  145142 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 04:22:53.598831  145142 start.go:563] Will wait 60s for crictl version
	I0719 04:22:53.598882  145142 ssh_runner.go:195] Run: which crictl
	I0719 04:22:53.602229  145142 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 04:22:53.635996  145142 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 04:22:53.636094  145142 ssh_runner.go:195] Run: crio --version
	I0719 04:22:53.661656  145142 ssh_runner.go:195] Run: crio --version
	I0719 04:22:53.689824  145142 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 04:22:53.691225  145142 main.go:141] libmachine: (ha-925161) Calling .GetIP
	I0719 04:22:53.694282  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:53.694729  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:22:53.694748  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:53.694969  145142 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 04:22:53.698733  145142 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
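The bash one-liner above rewrites /etc/hosts so host.minikube.internal resolves to the gateway 192.168.39.1. The same filter-and-append can be expressed in plain Go; this is a sketch under the assumption the file is small enough to rewrite in one pass:

    package main

    // Sketch: drop any existing "host.minikube.internal" entry from /etc/hosts and
    // append the gateway mapping, mirroring the { grep -v ...; echo ...; } one-liner.
    import (
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	const hostsFile = "/etc/hosts"
    	data, err := os.ReadFile(hostsFile)
    	if err != nil {
    		log.Fatal(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\thost.minikube.internal") {
    			continue // remove the stale entry, as `grep -v` does
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, "192.168.39.1\thost.minikube.internal")
    	if err := os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }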
	I0719 04:22:53.711205  145142 kubeadm.go:883] updating cluster {Name:ha-925161 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-925161 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 04:22:53.711432  145142 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 04:22:53.711526  145142 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 04:22:53.743183  145142 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 04:22:53.743251  145142 ssh_runner.go:195] Run: which lz4
	I0719 04:22:53.746798  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0719 04:22:53.746880  145142 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 04:22:53.750604  145142 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 04:22:53.750637  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0719 04:22:54.968122  145142 crio.go:462] duration metric: took 1.221260849s to copy over tarball
	I0719 04:22:54.968190  145142 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 04:22:57.078373  145142 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.110157022s)
	I0719 04:22:57.078410  145142 crio.go:469] duration metric: took 2.11026113s to extract the tarball
	I0719 04:22:57.078418  145142 ssh_runner.go:146] rm: /preloaded.tar.lz4
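Between 04:22:53.75 and 04:22:57.08 the ~406 MB preload tarball is copied over SSH and unpacked into /var with tar's lz4 filter, then deleted. A standalone Go sketch of the same extraction step, shelling out to tar exactly as the log's command does (assumes tar and lz4 are installed on the target):

    package main

    // Sketch: unpack a cri-o preload tarball into /var the same way the log does,
    // preserving security.capability xattrs and using lz4 decompression.
    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		log.Fatalf("extract failed: %v\n%s", err, out)
    	}
    }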
	I0719 04:22:57.116161  145142 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 04:22:57.164739  145142 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 04:22:57.164768  145142 cache_images.go:84] Images are preloaded, skipping loading
	I0719 04:22:57.164778  145142 kubeadm.go:934] updating node { 192.168.39.246 8443 v1.30.3 crio true true} ...
	I0719 04:22:57.164964  145142 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-925161 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-925161 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 04:22:57.165049  145142 ssh_runner.go:195] Run: crio config
	I0719 04:22:57.211378  145142 cni.go:84] Creating CNI manager for ""
	I0719 04:22:57.211395  145142 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0719 04:22:57.211404  145142 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 04:22:57.211424  145142 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.246 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-925161 NodeName:ha-925161 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 04:22:57.211551  145142 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-925161"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.246
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
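The kubeadm config printed above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that minikube later writes to /var/tmp/minikube/kubeadm.yaml. A small standard-library Go sketch for eyeballing such a file, splitting on document separators and reporting each document's kind (illustrative only):

    package main

    // Sketch: split a kubeadm-style multi-document YAML stream on "---" separators
    // and print each document's "kind:" line.
    import (
    	"fmt"
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
    	if err != nil {
    		log.Fatal(err)
    	}
    	for i, doc := range strings.Split(string(data), "\n---\n") {
    		kind := "unknown"
    		for _, line := range strings.Split(doc, "\n") {
    			if strings.HasPrefix(line, "kind: ") {
    				kind = strings.TrimPrefix(line, "kind: ")
    				break
    			}
    		}
    		fmt.Printf("document %d: kind=%s\n", i+1, kind)
    	}
    }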
	
	I0719 04:22:57.211579  145142 kube-vip.go:115] generating kube-vip config ...
	I0719 04:22:57.211621  145142 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0719 04:22:57.230247  145142 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0719 04:22:57.230345  145142 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
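The static pod above pins the HA virtual IP 192.168.39.254 (the APIServerHAVIP from the cluster config) to eth0 and load-balances port 8443 across control-plane nodes via kube-vip leader election. A quick reachability probe in Go, purely a hand-check sketch rather than anything minikube runs (the 3 s timeout is an arbitrary choice):

    package main

    // Sketch: check that the kube-vip virtual IP answers on the API server port.
    // Address and port are taken from the manifest above.
    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 3*time.Second)
    	if err != nil {
    		fmt.Println("VIP not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("VIP is accepting connections on 8443")
    }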
	I0719 04:22:57.230399  145142 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 04:22:57.243255  145142 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 04:22:57.243312  145142 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0719 04:22:57.257333  145142 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0719 04:22:57.272554  145142 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 04:22:57.287789  145142 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0719 04:22:57.303104  145142 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0719 04:22:57.318165  145142 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0719 04:22:57.321758  145142 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 04:22:57.332926  145142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:22:57.442766  145142 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 04:22:57.458401  145142 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161 for IP: 192.168.39.246
	I0719 04:22:57.458424  145142 certs.go:194] generating shared ca certs ...
	I0719 04:22:57.458440  145142 certs.go:226] acquiring lock for ca certs: {Name:mk4073377b5f511f5cfaf63e5b0f12377e731a57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:22:57.458619  145142 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key
	I0719 04:22:57.458672  145142 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key
	I0719 04:22:57.458685  145142 certs.go:256] generating profile certs ...
	I0719 04:22:57.458746  145142 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/client.key
	I0719 04:22:57.458764  145142 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/client.crt with IP's: []
	I0719 04:22:57.614806  145142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/client.crt ...
	I0719 04:22:57.614835  145142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/client.crt: {Name:mk2b285240478b195a743d5dbbf2e8b1205963d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:22:57.614999  145142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/client.key ...
	I0719 04:22:57.615042  145142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/client.key: {Name:mk5af0dd55a6ddee32443cac6901c4084cc1af27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:22:57.615123  145142 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key.eb4c9cee
	I0719 04:22:57.615138  145142 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt.eb4c9cee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246 192.168.39.254]
	I0719 04:22:57.792532  145142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt.eb4c9cee ...
	I0719 04:22:57.792565  145142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt.eb4c9cee: {Name:mkeae466d3f989c23944a81afdc9c59192b64e87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:22:57.792733  145142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key.eb4c9cee ...
	I0719 04:22:57.792744  145142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key.eb4c9cee: {Name:mkffd606373cfbf144032e67b52d14d744d79f1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:22:57.792811  145142 certs.go:381] copying /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt.eb4c9cee -> /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt
	I0719 04:22:57.792880  145142 certs.go:385] copying /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key.eb4c9cee -> /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key
	I0719 04:22:57.792930  145142 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.key
	I0719 04:22:57.792944  145142 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.crt with IP's: []
	I0719 04:22:57.863362  145142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.crt ...
	I0719 04:22:57.863396  145142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.crt: {Name:mk5a457234641ef9d141c282246d2d8c5a6a8587 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:22:57.863564  145142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.key ...
	I0719 04:22:57.863574  145142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.key: {Name:mk6de56d1e9e4ace980d9a078dcedb69f0c01037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:22:57.863648  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 04:22:57.863664  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0719 04:22:57.863677  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 04:22:57.863689  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 04:22:57.863698  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 04:22:57.863711  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 04:22:57.863722  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 04:22:57.863733  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 04:22:57.863776  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem (1338 bytes)
	W0719 04:22:57.863808  145142 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170_empty.pem, impossibly tiny 0 bytes
	I0719 04:22:57.863819  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 04:22:57.863842  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem (1082 bytes)
	I0719 04:22:57.863862  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem (1123 bytes)
	I0719 04:22:57.863882  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem (1679 bytes)
	I0719 04:22:57.863917  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem (1708 bytes)
	I0719 04:22:57.863945  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem -> /usr/share/ca-certificates/130170.pem
	I0719 04:22:57.863957  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem -> /usr/share/ca-certificates/1301702.pem
	I0719 04:22:57.863969  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:22:57.864478  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 04:22:57.889058  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 04:22:57.910801  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 04:22:57.933050  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 04:22:57.954707  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 04:22:57.976926  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 04:22:57.998696  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 04:22:58.020435  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 04:22:58.041656  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem --> /usr/share/ca-certificates/130170.pem (1338 bytes)
	I0719 04:22:58.062439  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem --> /usr/share/ca-certificates/1301702.pem (1708 bytes)
	I0719 04:22:58.083180  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 04:22:58.103892  145142 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 04:22:58.118997  145142 ssh_runner.go:195] Run: openssl version
	I0719 04:22:58.124281  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1301702.pem && ln -fs /usr/share/ca-certificates/1301702.pem /etc/ssl/certs/1301702.pem"
	I0719 04:22:58.133860  145142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1301702.pem
	I0719 04:22:58.137892  145142 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 04:19 /usr/share/ca-certificates/1301702.pem
	I0719 04:22:58.137938  145142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1301702.pem
	I0719 04:22:58.143251  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1301702.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 04:22:58.153127  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 04:22:58.162724  145142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:22:58.166937  145142 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:38 /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:22:58.166995  145142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:22:58.172114  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 04:22:58.181639  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130170.pem && ln -fs /usr/share/ca-certificates/130170.pem /etc/ssl/certs/130170.pem"
	I0719 04:22:58.191344  145142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130170.pem
	I0719 04:22:58.195293  145142 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 04:19 /usr/share/ca-certificates/130170.pem
	I0719 04:22:58.195344  145142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130170.pem
	I0719 04:22:58.200358  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/130170.pem /etc/ssl/certs/51391683.0"
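Each certificate pushed to /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout` and symlinked as `<hash>.0` under /etc/ssl/certs, which is how OpenSSL locates trust anchors; the log shows hashes 3ec20f2e, b5213941 and 51391683. A Go sketch of that hash-and-link step, shelling out to openssl (not minikube's actual helper):

    package main

    // Sketch: compute the OpenSSL subject hash of a CA certificate and create the
    // /etc/ssl/certs/<hash>.0 symlink, mirroring the openssl/ln commands in the log.
    import (
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	const pem = "/usr/share/ca-certificates/minikubeCA.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941", as seen in the log
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // replace any stale link, like `ln -fs`
    	if err := os.Symlink(pem, link); err != nil {
    		log.Fatal(err)
    	}
    }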
	I0719 04:22:58.210174  145142 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 04:22:58.213728  145142 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 04:22:58.213784  145142 kubeadm.go:392] StartCluster: {Name:ha-925161 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-925161 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:22:58.213872  145142 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 04:22:58.213932  145142 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 04:22:58.261349  145142 cri.go:89] found id: ""
	I0719 04:22:58.261440  145142 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 04:22:58.272291  145142 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 04:22:58.284767  145142 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 04:22:58.295705  145142 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 04:22:58.295727  145142 kubeadm.go:157] found existing configuration files:
	
	I0719 04:22:58.295780  145142 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 04:22:58.304678  145142 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 04:22:58.304737  145142 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 04:22:58.313231  145142 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 04:22:58.321223  145142 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 04:22:58.321284  145142 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 04:22:58.329529  145142 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 04:22:58.337801  145142 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 04:22:58.337853  145142 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 04:22:58.346107  145142 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 04:22:58.353960  145142 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 04:22:58.354010  145142 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 04:22:58.362072  145142 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 04:22:58.455163  145142 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0719 04:22:58.455292  145142 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 04:22:58.562041  145142 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 04:22:58.562203  145142 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 04:22:58.562316  145142 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0719 04:22:58.740762  145142 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 04:22:58.743009  145142 out.go:204]   - Generating certificates and keys ...
	I0719 04:22:58.743131  145142 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 04:22:58.743200  145142 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 04:22:59.292694  145142 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 04:22:59.399545  145142 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0719 04:22:59.579278  145142 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0719 04:22:59.901922  145142 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0719 04:22:59.986580  145142 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0719 04:22:59.986694  145142 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-925161 localhost] and IPs [192.168.39.246 127.0.0.1 ::1]
	I0719 04:23:00.211063  145142 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0719 04:23:00.211259  145142 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-925161 localhost] and IPs [192.168.39.246 127.0.0.1 ::1]
	I0719 04:23:00.315632  145142 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 04:23:00.456874  145142 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 04:23:00.661314  145142 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0719 04:23:00.661380  145142 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 04:23:00.827429  145142 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 04:23:01.009407  145142 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 04:23:01.113224  145142 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 04:23:01.329786  145142 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 04:23:01.627231  145142 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 04:23:01.627729  145142 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 04:23:01.630104  145142 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 04:23:01.771182  145142 out.go:204]   - Booting up control plane ...
	I0719 04:23:01.771326  145142 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 04:23:01.771440  145142 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 04:23:01.771532  145142 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 04:23:01.771671  145142 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 04:23:01.771808  145142 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 04:23:01.771858  145142 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 04:23:01.797635  145142 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 04:23:01.797718  145142 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 04:23:02.300388  145142 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 504.569979ms
	I0719 04:23:02.300511  145142 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 04:23:08.370659  145142 kubeadm.go:310] [api-check] The API server is healthy after 6.07413465s
	I0719 04:23:08.382803  145142 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 04:23:08.397124  145142 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 04:23:08.438896  145142 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 04:23:08.439110  145142 kubeadm.go:310] [mark-control-plane] Marking the node ha-925161 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 04:23:08.455201  145142 kubeadm.go:310] [bootstrap-token] Using token: ncc8dk.18bi28qrzcrx8rop
	I0719 04:23:08.456786  145142 out.go:204]   - Configuring RBAC rules ...
	I0719 04:23:08.456935  145142 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 04:23:08.462217  145142 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 04:23:08.473602  145142 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 04:23:08.477175  145142 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 04:23:08.480146  145142 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 04:23:08.483162  145142 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 04:23:08.775785  145142 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 04:23:09.232911  145142 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 04:23:09.776508  145142 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 04:23:09.776533  145142 kubeadm.go:310] 
	I0719 04:23:09.776655  145142 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 04:23:09.776697  145142 kubeadm.go:310] 
	I0719 04:23:09.776800  145142 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 04:23:09.776812  145142 kubeadm.go:310] 
	I0719 04:23:09.776856  145142 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 04:23:09.776932  145142 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 04:23:09.777012  145142 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 04:23:09.777020  145142 kubeadm.go:310] 
	I0719 04:23:09.777104  145142 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 04:23:09.777117  145142 kubeadm.go:310] 
	I0719 04:23:09.777180  145142 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 04:23:09.777191  145142 kubeadm.go:310] 
	I0719 04:23:09.777233  145142 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 04:23:09.777342  145142 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 04:23:09.777427  145142 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 04:23:09.777434  145142 kubeadm.go:310] 
	I0719 04:23:09.777535  145142 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 04:23:09.777637  145142 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 04:23:09.777645  145142 kubeadm.go:310] 
	I0719 04:23:09.777755  145142 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ncc8dk.18bi28qrzcrx8rop \
	I0719 04:23:09.777886  145142 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1b8c9b438cd382daae07d0c80077e3e844c6e3a56a419c26c4cfa86e5846b833 \
	I0719 04:23:09.777909  145142 kubeadm.go:310] 	--control-plane 
	I0719 04:23:09.777929  145142 kubeadm.go:310] 
	I0719 04:23:09.778038  145142 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 04:23:09.778050  145142 kubeadm.go:310] 
	I0719 04:23:09.778157  145142 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ncc8dk.18bi28qrzcrx8rop \
	I0719 04:23:09.778314  145142 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1b8c9b438cd382daae07d0c80077e3e844c6e3a56a419c26c4cfa86e5846b833 
	I0719 04:23:09.778467  145142 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 04:23:09.778484  145142 cni.go:84] Creating CNI manager for ""
	I0719 04:23:09.778493  145142 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0719 04:23:09.780280  145142 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0719 04:23:09.781555  145142 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0719 04:23:09.786731  145142 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0719 04:23:09.786755  145142 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0719 04:23:09.804703  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0719 04:23:10.155467  145142 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 04:23:10.155538  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:10.155538  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-925161 minikube.k8s.io/updated_at=2024_07_19T04_23_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db minikube.k8s.io/name=ha-925161 minikube.k8s.io/primary=true
	I0719 04:23:10.183982  145142 ops.go:34] apiserver oom_adj: -16
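ops.go reads the API server's OOM score via `cat /proc/$(pgrep kube-apiserver)/oom_adj` (the run at 04:23:10.155467) and reports -16. An equivalent check in Go, assuming a single kube-apiserver process on the node:

    package main

    // Sketch: find the kube-apiserver PID with pgrep and read its oom_adj from /proc,
    // the same probe the log performs with a shell pipeline.
    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		log.Fatal(err) // pgrep exits non-zero when no process matches
    	}
    	pid := strings.Fields(string(out))[0] // take the first PID if several are listed
    	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("kube-apiserver oom_adj: %s", data)
    }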
	I0719 04:23:10.354607  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:10.855068  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:11.354675  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:11.854872  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:12.355551  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:12.855569  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:13.354645  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:13.855579  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:14.355472  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:14.854888  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:15.355575  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:15.854963  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:16.355321  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:16.854770  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:17.354631  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:17.854931  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:18.354727  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:18.855480  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:19.354963  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:19.855677  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:20.355558  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:20.854930  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:21.354922  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:21.472004  145142 kubeadm.go:1113] duration metric: took 11.316526469s to wait for elevateKubeSystemPrivileges
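The burst of `kubectl get sa default` runs between 04:23:10 and 04:23:21 is minikube polling for the default ServiceAccount before it finishes elevating kube-system privileges. A stripped-down version of that wait loop in Go, shelling out to kubectl with the same binary and kubeconfig paths the log shows (the two-minute deadline is illustrative; the ~500 ms interval matches the timestamps above):

    package main

    // Sketch: poll `kubectl get sa default` until the default ServiceAccount exists,
    // similar in spirit to the repeated runs in the log.
    import (
    	"log"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.3/kubectl",
    			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
    		if err := cmd.Run(); err == nil {
    			log.Println("default ServiceAccount is ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	log.Fatal("timed out waiting for the default ServiceAccount")
    }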
	I0719 04:23:21.472045  145142 kubeadm.go:394] duration metric: took 23.258265944s to StartCluster
	I0719 04:23:21.472065  145142 settings.go:142] acquiring lock: {Name:mka29304fbead54bd9b698f9018edea7e59177cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:23:21.472152  145142 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19302-122995/kubeconfig
	I0719 04:23:21.472844  145142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/kubeconfig: {Name:mk6e4a1b81f147a5c312ddde5acb372811581248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:23:21.473103  145142 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 04:23:21.473130  145142 start.go:241] waiting for startup goroutines ...
	I0719 04:23:21.473113  145142 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0719 04:23:21.473171  145142 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 04:23:21.473266  145142 addons.go:69] Setting storage-provisioner=true in profile "ha-925161"
	I0719 04:23:21.473270  145142 addons.go:69] Setting default-storageclass=true in profile "ha-925161"
	I0719 04:23:21.473308  145142 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-925161"
	I0719 04:23:21.473325  145142 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:23:21.473311  145142 addons.go:234] Setting addon storage-provisioner=true in "ha-925161"
	I0719 04:23:21.473420  145142 host.go:66] Checking if "ha-925161" exists ...
	I0719 04:23:21.473697  145142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:23:21.473726  145142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:23:21.473753  145142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:23:21.473786  145142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:23:21.488830  145142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46161
	I0719 04:23:21.489125  145142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39553
	I0719 04:23:21.489308  145142 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:23:21.489565  145142 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:23:21.489857  145142 main.go:141] libmachine: Using API Version  1
	I0719 04:23:21.489883  145142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:23:21.490015  145142 main.go:141] libmachine: Using API Version  1
	I0719 04:23:21.490039  145142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:23:21.490205  145142 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:23:21.490326  145142 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:23:21.490507  145142 main.go:141] libmachine: (ha-925161) Calling .GetState
	I0719 04:23:21.490737  145142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:23:21.490784  145142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:23:21.492853  145142 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19302-122995/kubeconfig
	I0719 04:23:21.493219  145142 kapi.go:59] client config for ha-925161: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/client.crt", KeyFile:"/home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/client.key", CAFile:"/home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 04:23:21.493756  145142 cert_rotation.go:137] Starting client certificate rotation controller
	I0719 04:23:21.494012  145142 addons.go:234] Setting addon default-storageclass=true in "ha-925161"
	I0719 04:23:21.494061  145142 host.go:66] Checking if "ha-925161" exists ...
	I0719 04:23:21.494430  145142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:23:21.494476  145142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:23:21.505441  145142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38507
	I0719 04:23:21.505878  145142 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:23:21.506323  145142 main.go:141] libmachine: Using API Version  1
	I0719 04:23:21.506346  145142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:23:21.506672  145142 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:23:21.506878  145142 main.go:141] libmachine: (ha-925161) Calling .GetState
	I0719 04:23:21.508773  145142 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:23:21.508830  145142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46873
	I0719 04:23:21.509244  145142 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:23:21.509764  145142 main.go:141] libmachine: Using API Version  1
	I0719 04:23:21.509788  145142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:23:21.510171  145142 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:23:21.510850  145142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:23:21.510896  145142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:23:21.511002  145142 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 04:23:21.512574  145142 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 04:23:21.512594  145142 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 04:23:21.512614  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:23:21.515483  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:23:21.515925  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:23:21.515951  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:23:21.516105  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:23:21.516288  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:23:21.516454  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:23:21.516577  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:23:21.526157  145142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39953
	I0719 04:23:21.526628  145142 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:23:21.527126  145142 main.go:141] libmachine: Using API Version  1
	I0719 04:23:21.527150  145142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:23:21.527494  145142 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:23:21.527819  145142 main.go:141] libmachine: (ha-925161) Calling .GetState
	I0719 04:23:21.529402  145142 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:23:21.529638  145142 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 04:23:21.529652  145142 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 04:23:21.529670  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:23:21.532635  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:23:21.533021  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:23:21.533087  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:23:21.533331  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:23:21.533531  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:23:21.533681  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:23:21.533846  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:23:21.580823  145142 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0719 04:23:21.666715  145142 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 04:23:21.678076  145142 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 04:23:22.028194  145142 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
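Note: the sed pipeline logged above rewrites the kube-system coredns ConfigMap in place, adding a log directive before the errors plugin and a hosts block ahead of the forward plugin, which is what the "host record injected" message confirms. Assuming the stock minikube Corefile, the rewritten server block would read roughly as follows (an illustrative excerpt, not output captured from this run):

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }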
	I0719 04:23:22.035621  145142 main.go:141] libmachine: Making call to close driver server
	I0719 04:23:22.035648  145142 main.go:141] libmachine: (ha-925161) Calling .Close
	I0719 04:23:22.036061  145142 main.go:141] libmachine: Successfully made call to close driver server
	I0719 04:23:22.036080  145142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 04:23:22.036087  145142 main.go:141] libmachine: Making call to close driver server
	I0719 04:23:22.036095  145142 main.go:141] libmachine: (ha-925161) Calling .Close
	I0719 04:23:22.036366  145142 main.go:141] libmachine: (ha-925161) DBG | Closing plugin on server side
	I0719 04:23:22.036368  145142 main.go:141] libmachine: Successfully made call to close driver server
	I0719 04:23:22.036392  145142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 04:23:22.036530  145142 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0719 04:23:22.036541  145142 round_trippers.go:469] Request Headers:
	I0719 04:23:22.036550  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:23:22.036555  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:23:22.045742  145142 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0719 04:23:22.046518  145142 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0719 04:23:22.046542  145142 round_trippers.go:469] Request Headers:
	I0719 04:23:22.046552  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:23:22.046557  145142 round_trippers.go:473]     Content-Type: application/json
	I0719 04:23:22.046562  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:23:22.054881  145142 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0719 04:23:22.055065  145142 main.go:141] libmachine: Making call to close driver server
	I0719 04:23:22.055083  145142 main.go:141] libmachine: (ha-925161) Calling .Close
	I0719 04:23:22.055366  145142 main.go:141] libmachine: Successfully made call to close driver server
	I0719 04:23:22.055384  145142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 04:23:22.259372  145142 main.go:141] libmachine: Making call to close driver server
	I0719 04:23:22.259392  145142 main.go:141] libmachine: (ha-925161) Calling .Close
	I0719 04:23:22.259666  145142 main.go:141] libmachine: Successfully made call to close driver server
	I0719 04:23:22.259683  145142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 04:23:22.259690  145142 main.go:141] libmachine: Making call to close driver server
	I0719 04:23:22.259697  145142 main.go:141] libmachine: (ha-925161) Calling .Close
	I0719 04:23:22.259953  145142 main.go:141] libmachine: Successfully made call to close driver server
	I0719 04:23:22.259997  145142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 04:23:22.259972  145142 main.go:141] libmachine: (ha-925161) DBG | Closing plugin on server side
	I0719 04:23:22.261635  145142 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0719 04:23:22.262889  145142 addons.go:510] duration metric: took 789.717661ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0719 04:23:22.262926  145142 start.go:246] waiting for cluster config update ...
	I0719 04:23:22.262938  145142 start.go:255] writing updated cluster config ...
	I0719 04:23:22.264365  145142 out.go:177] 
	I0719 04:23:22.265565  145142 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:23:22.265634  145142 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/config.json ...
	I0719 04:23:22.267313  145142 out.go:177] * Starting "ha-925161-m02" control-plane node in "ha-925161" cluster
	I0719 04:23:22.268379  145142 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 04:23:22.268424  145142 cache.go:56] Caching tarball of preloaded images
	I0719 04:23:22.268525  145142 preload.go:172] Found /home/jenkins/minikube-integration/19302-122995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 04:23:22.268538  145142 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 04:23:22.268627  145142 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/config.json ...
	I0719 04:23:22.268844  145142 start.go:360] acquireMachinesLock for ha-925161-m02: {Name:mkfbbe6ca8c44534b944b48224a0199ec825bc72 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 04:23:22.268907  145142 start.go:364] duration metric: took 36.053µs to acquireMachinesLock for "ha-925161-m02"
	I0719 04:23:22.268928  145142 start.go:93] Provisioning new machine with config: &{Name:ha-925161 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-925161 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 04:23:22.269013  145142 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0719 04:23:22.270308  145142 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 04:23:22.270405  145142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:23:22.270435  145142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:23:22.285250  145142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34229
	I0719 04:23:22.285656  145142 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:23:22.286103  145142 main.go:141] libmachine: Using API Version  1
	I0719 04:23:22.286120  145142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:23:22.286450  145142 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:23:22.286707  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetMachineName
	I0719 04:23:22.286870  145142 main.go:141] libmachine: (ha-925161-m02) Calling .DriverName
	I0719 04:23:22.287064  145142 start.go:159] libmachine.API.Create for "ha-925161" (driver="kvm2")
	I0719 04:23:22.287091  145142 client.go:168] LocalClient.Create starting
	I0719 04:23:22.287126  145142 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem
	I0719 04:23:22.287168  145142 main.go:141] libmachine: Decoding PEM data...
	I0719 04:23:22.287189  145142 main.go:141] libmachine: Parsing certificate...
	I0719 04:23:22.287260  145142 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem
	I0719 04:23:22.287286  145142 main.go:141] libmachine: Decoding PEM data...
	I0719 04:23:22.287301  145142 main.go:141] libmachine: Parsing certificate...
	I0719 04:23:22.287356  145142 main.go:141] libmachine: Running pre-create checks...
	I0719 04:23:22.287371  145142 main.go:141] libmachine: (ha-925161-m02) Calling .PreCreateCheck
	I0719 04:23:22.287545  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetConfigRaw
	I0719 04:23:22.287972  145142 main.go:141] libmachine: Creating machine...
	I0719 04:23:22.287988  145142 main.go:141] libmachine: (ha-925161-m02) Calling .Create
	I0719 04:23:22.288130  145142 main.go:141] libmachine: (ha-925161-m02) Creating KVM machine...
	I0719 04:23:22.289431  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found existing default KVM network
	I0719 04:23:22.289566  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found existing private KVM network mk-ha-925161
	I0719 04:23:22.289676  145142 main.go:141] libmachine: (ha-925161-m02) Setting up store path in /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02 ...
	I0719 04:23:22.289699  145142 main.go:141] libmachine: (ha-925161-m02) Building disk image from file:///home/jenkins/minikube-integration/19302-122995/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 04:23:22.289752  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:22.289667  145551 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19302-122995/.minikube
	I0719 04:23:22.289863  145142 main.go:141] libmachine: (ha-925161-m02) Downloading /home/jenkins/minikube-integration/19302-122995/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19302-122995/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 04:23:22.524417  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:22.524279  145551 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02/id_rsa...
	I0719 04:23:22.566631  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:22.566532  145551 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02/ha-925161-m02.rawdisk...
	I0719 04:23:22.566663  145142 main.go:141] libmachine: (ha-925161-m02) DBG | Writing magic tar header
	I0719 04:23:22.566709  145142 main.go:141] libmachine: (ha-925161-m02) DBG | Writing SSH key tar header
	I0719 04:23:22.566735  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:22.566643  145551 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02 ...
	I0719 04:23:22.566752  145142 main.go:141] libmachine: (ha-925161-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02
	I0719 04:23:22.566761  145142 main.go:141] libmachine: (ha-925161-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995/.minikube/machines
	I0719 04:23:22.566770  145142 main.go:141] libmachine: (ha-925161-m02) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02 (perms=drwx------)
	I0719 04:23:22.566779  145142 main.go:141] libmachine: (ha-925161-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995/.minikube
	I0719 04:23:22.566788  145142 main.go:141] libmachine: (ha-925161-m02) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995/.minikube/machines (perms=drwxr-xr-x)
	I0719 04:23:22.566805  145142 main.go:141] libmachine: (ha-925161-m02) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995/.minikube (perms=drwxr-xr-x)
	I0719 04:23:22.566819  145142 main.go:141] libmachine: (ha-925161-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995
	I0719 04:23:22.566833  145142 main.go:141] libmachine: (ha-925161-m02) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995 (perms=drwxrwxr-x)
	I0719 04:23:22.566845  145142 main.go:141] libmachine: (ha-925161-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0719 04:23:22.566854  145142 main.go:141] libmachine: (ha-925161-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0719 04:23:22.566860  145142 main.go:141] libmachine: (ha-925161-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0719 04:23:22.566867  145142 main.go:141] libmachine: (ha-925161-m02) DBG | Checking permissions on dir: /home/jenkins
	I0719 04:23:22.566873  145142 main.go:141] libmachine: (ha-925161-m02) DBG | Checking permissions on dir: /home
	I0719 04:23:22.566881  145142 main.go:141] libmachine: (ha-925161-m02) DBG | Skipping /home - not owner
	I0719 04:23:22.566911  145142 main.go:141] libmachine: (ha-925161-m02) Creating domain...
	I0719 04:23:22.567739  145142 main.go:141] libmachine: (ha-925161-m02) define libvirt domain using xml: 
	I0719 04:23:22.567749  145142 main.go:141] libmachine: (ha-925161-m02) <domain type='kvm'>
	I0719 04:23:22.567759  145142 main.go:141] libmachine: (ha-925161-m02)   <name>ha-925161-m02</name>
	I0719 04:23:22.567764  145142 main.go:141] libmachine: (ha-925161-m02)   <memory unit='MiB'>2200</memory>
	I0719 04:23:22.567769  145142 main.go:141] libmachine: (ha-925161-m02)   <vcpu>2</vcpu>
	I0719 04:23:22.567779  145142 main.go:141] libmachine: (ha-925161-m02)   <features>
	I0719 04:23:22.567786  145142 main.go:141] libmachine: (ha-925161-m02)     <acpi/>
	I0719 04:23:22.567793  145142 main.go:141] libmachine: (ha-925161-m02)     <apic/>
	I0719 04:23:22.567800  145142 main.go:141] libmachine: (ha-925161-m02)     <pae/>
	I0719 04:23:22.567806  145142 main.go:141] libmachine: (ha-925161-m02)     
	I0719 04:23:22.567814  145142 main.go:141] libmachine: (ha-925161-m02)   </features>
	I0719 04:23:22.567822  145142 main.go:141] libmachine: (ha-925161-m02)   <cpu mode='host-passthrough'>
	I0719 04:23:22.567831  145142 main.go:141] libmachine: (ha-925161-m02)   
	I0719 04:23:22.567836  145142 main.go:141] libmachine: (ha-925161-m02)   </cpu>
	I0719 04:23:22.567841  145142 main.go:141] libmachine: (ha-925161-m02)   <os>
	I0719 04:23:22.567850  145142 main.go:141] libmachine: (ha-925161-m02)     <type>hvm</type>
	I0719 04:23:22.567855  145142 main.go:141] libmachine: (ha-925161-m02)     <boot dev='cdrom'/>
	I0719 04:23:22.567865  145142 main.go:141] libmachine: (ha-925161-m02)     <boot dev='hd'/>
	I0719 04:23:22.567871  145142 main.go:141] libmachine: (ha-925161-m02)     <bootmenu enable='no'/>
	I0719 04:23:22.567876  145142 main.go:141] libmachine: (ha-925161-m02)   </os>
	I0719 04:23:22.567881  145142 main.go:141] libmachine: (ha-925161-m02)   <devices>
	I0719 04:23:22.567889  145142 main.go:141] libmachine: (ha-925161-m02)     <disk type='file' device='cdrom'>
	I0719 04:23:22.567901  145142 main.go:141] libmachine: (ha-925161-m02)       <source file='/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02/boot2docker.iso'/>
	I0719 04:23:22.567914  145142 main.go:141] libmachine: (ha-925161-m02)       <target dev='hdc' bus='scsi'/>
	I0719 04:23:22.567922  145142 main.go:141] libmachine: (ha-925161-m02)       <readonly/>
	I0719 04:23:22.567929  145142 main.go:141] libmachine: (ha-925161-m02)     </disk>
	I0719 04:23:22.567950  145142 main.go:141] libmachine: (ha-925161-m02)     <disk type='file' device='disk'>
	I0719 04:23:22.567967  145142 main.go:141] libmachine: (ha-925161-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0719 04:23:22.567976  145142 main.go:141] libmachine: (ha-925161-m02)       <source file='/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02/ha-925161-m02.rawdisk'/>
	I0719 04:23:22.567984  145142 main.go:141] libmachine: (ha-925161-m02)       <target dev='hda' bus='virtio'/>
	I0719 04:23:22.567989  145142 main.go:141] libmachine: (ha-925161-m02)     </disk>
	I0719 04:23:22.567996  145142 main.go:141] libmachine: (ha-925161-m02)     <interface type='network'>
	I0719 04:23:22.568003  145142 main.go:141] libmachine: (ha-925161-m02)       <source network='mk-ha-925161'/>
	I0719 04:23:22.568009  145142 main.go:141] libmachine: (ha-925161-m02)       <model type='virtio'/>
	I0719 04:23:22.568014  145142 main.go:141] libmachine: (ha-925161-m02)     </interface>
	I0719 04:23:22.568021  145142 main.go:141] libmachine: (ha-925161-m02)     <interface type='network'>
	I0719 04:23:22.568027  145142 main.go:141] libmachine: (ha-925161-m02)       <source network='default'/>
	I0719 04:23:22.568034  145142 main.go:141] libmachine: (ha-925161-m02)       <model type='virtio'/>
	I0719 04:23:22.568039  145142 main.go:141] libmachine: (ha-925161-m02)     </interface>
	I0719 04:23:22.568051  145142 main.go:141] libmachine: (ha-925161-m02)     <serial type='pty'>
	I0719 04:23:22.568059  145142 main.go:141] libmachine: (ha-925161-m02)       <target port='0'/>
	I0719 04:23:22.568066  145142 main.go:141] libmachine: (ha-925161-m02)     </serial>
	I0719 04:23:22.568087  145142 main.go:141] libmachine: (ha-925161-m02)     <console type='pty'>
	I0719 04:23:22.568102  145142 main.go:141] libmachine: (ha-925161-m02)       <target type='serial' port='0'/>
	I0719 04:23:22.568114  145142 main.go:141] libmachine: (ha-925161-m02)     </console>
	I0719 04:23:22.568124  145142 main.go:141] libmachine: (ha-925161-m02)     <rng model='virtio'>
	I0719 04:23:22.568135  145142 main.go:141] libmachine: (ha-925161-m02)       <backend model='random'>/dev/random</backend>
	I0719 04:23:22.568145  145142 main.go:141] libmachine: (ha-925161-m02)     </rng>
	I0719 04:23:22.568153  145142 main.go:141] libmachine: (ha-925161-m02)     
	I0719 04:23:22.568162  145142 main.go:141] libmachine: (ha-925161-m02)     
	I0719 04:23:22.568170  145142 main.go:141] libmachine: (ha-925161-m02)   </devices>
	I0719 04:23:22.568183  145142 main.go:141] libmachine: (ha-925161-m02) </domain>
	I0719 04:23:22.568212  145142 main.go:141] libmachine: (ha-925161-m02) 
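Note: the block above is the libvirt domain XML the kvm2 driver defines for the ha-925161-m02 VM (boot2docker ISO attached as a CD-ROM, the raw disk image, and two virtio NICs on the default and mk-ha-925161 networks). Purely as an illustrative cross-check on the host, and not part of minikube's own flow, the stored definition and the DHCP lease the driver polls for below could be inspected with standard virsh commands:

    # dump the definition libvirt stored for the newly created domain
    virsh --connect qemu:///system dumpxml ha-925161-m02
    # show DHCP-assigned addresses (what the retry loop below is waiting for)
    virsh --connect qemu:///system domifaddr ha-925161-m02 --source lease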
	I0719 04:23:22.574696  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:4a:49:4a in network default
	I0719 04:23:22.575126  145142 main.go:141] libmachine: (ha-925161-m02) Ensuring networks are active...
	I0719 04:23:22.575140  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:22.575718  145142 main.go:141] libmachine: (ha-925161-m02) Ensuring network default is active
	I0719 04:23:22.575984  145142 main.go:141] libmachine: (ha-925161-m02) Ensuring network mk-ha-925161 is active
	I0719 04:23:22.576273  145142 main.go:141] libmachine: (ha-925161-m02) Getting domain xml...
	I0719 04:23:22.576903  145142 main.go:141] libmachine: (ha-925161-m02) Creating domain...
	I0719 04:23:23.822612  145142 main.go:141] libmachine: (ha-925161-m02) Waiting to get IP...
	I0719 04:23:23.823391  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:23.823835  145142 main.go:141] libmachine: (ha-925161-m02) DBG | unable to find current IP address of domain ha-925161-m02 in network mk-ha-925161
	I0719 04:23:23.823860  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:23.823808  145551 retry.go:31] will retry after 275.972565ms: waiting for machine to come up
	I0719 04:23:24.101445  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:24.101947  145142 main.go:141] libmachine: (ha-925161-m02) DBG | unable to find current IP address of domain ha-925161-m02 in network mk-ha-925161
	I0719 04:23:24.101976  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:24.101901  145551 retry.go:31] will retry after 260.725307ms: waiting for machine to come up
	I0719 04:23:24.364444  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:24.364955  145142 main.go:141] libmachine: (ha-925161-m02) DBG | unable to find current IP address of domain ha-925161-m02 in network mk-ha-925161
	I0719 04:23:24.364979  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:24.364916  145551 retry.go:31] will retry after 330.33525ms: waiting for machine to come up
	I0719 04:23:24.696430  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:24.696874  145142 main.go:141] libmachine: (ha-925161-m02) DBG | unable to find current IP address of domain ha-925161-m02 in network mk-ha-925161
	I0719 04:23:24.696900  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:24.696824  145551 retry.go:31] will retry after 565.545583ms: waiting for machine to come up
	I0719 04:23:25.264349  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:25.264830  145142 main.go:141] libmachine: (ha-925161-m02) DBG | unable to find current IP address of domain ha-925161-m02 in network mk-ha-925161
	I0719 04:23:25.264853  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:25.264796  145551 retry.go:31] will retry after 675.025996ms: waiting for machine to come up
	I0719 04:23:25.941773  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:25.942328  145142 main.go:141] libmachine: (ha-925161-m02) DBG | unable to find current IP address of domain ha-925161-m02 in network mk-ha-925161
	I0719 04:23:25.942354  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:25.942286  145551 retry.go:31] will retry after 916.575061ms: waiting for machine to come up
	I0719 04:23:26.860018  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:26.860488  145142 main.go:141] libmachine: (ha-925161-m02) DBG | unable to find current IP address of domain ha-925161-m02 in network mk-ha-925161
	I0719 04:23:26.860513  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:26.860431  145551 retry.go:31] will retry after 811.549285ms: waiting for machine to come up
	I0719 04:23:27.673180  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:27.673674  145142 main.go:141] libmachine: (ha-925161-m02) DBG | unable to find current IP address of domain ha-925161-m02 in network mk-ha-925161
	I0719 04:23:27.673700  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:27.673623  145551 retry.go:31] will retry after 1.317439306s: waiting for machine to come up
	I0719 04:23:28.993057  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:28.993522  145142 main.go:141] libmachine: (ha-925161-m02) DBG | unable to find current IP address of domain ha-925161-m02 in network mk-ha-925161
	I0719 04:23:28.993548  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:28.993475  145551 retry.go:31] will retry after 1.539873167s: waiting for machine to come up
	I0719 04:23:30.535187  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:30.535597  145142 main.go:141] libmachine: (ha-925161-m02) DBG | unable to find current IP address of domain ha-925161-m02 in network mk-ha-925161
	I0719 04:23:30.535624  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:30.535543  145551 retry.go:31] will retry after 1.962816348s: waiting for machine to come up
	I0719 04:23:32.500041  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:32.500533  145142 main.go:141] libmachine: (ha-925161-m02) DBG | unable to find current IP address of domain ha-925161-m02 in network mk-ha-925161
	I0719 04:23:32.500559  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:32.500487  145551 retry.go:31] will retry after 2.523138452s: waiting for machine to come up
	I0719 04:23:35.026265  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:35.026731  145142 main.go:141] libmachine: (ha-925161-m02) DBG | unable to find current IP address of domain ha-925161-m02 in network mk-ha-925161
	I0719 04:23:35.026758  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:35.026693  145551 retry.go:31] will retry after 2.642099523s: waiting for machine to come up
	I0719 04:23:37.670505  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:37.670903  145142 main.go:141] libmachine: (ha-925161-m02) DBG | unable to find current IP address of domain ha-925161-m02 in network mk-ha-925161
	I0719 04:23:37.670925  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:37.670868  145551 retry.go:31] will retry after 2.788794797s: waiting for machine to come up
	I0719 04:23:40.462661  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:40.463059  145142 main.go:141] libmachine: (ha-925161-m02) DBG | unable to find current IP address of domain ha-925161-m02 in network mk-ha-925161
	I0719 04:23:40.463087  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:40.463016  145551 retry.go:31] will retry after 5.427001191s: waiting for machine to come up
	I0719 04:23:45.893886  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:45.894410  145142 main.go:141] libmachine: (ha-925161-m02) Found IP for machine: 192.168.39.102
	I0719 04:23:45.894441  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has current primary IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:45.894450  145142 main.go:141] libmachine: (ha-925161-m02) Reserving static IP address...
	I0719 04:23:45.894803  145142 main.go:141] libmachine: (ha-925161-m02) DBG | unable to find host DHCP lease matching {name: "ha-925161-m02", mac: "52:54:00:17:48:0b", ip: "192.168.39.102"} in network mk-ha-925161
	I0719 04:23:45.966789  145142 main.go:141] libmachine: (ha-925161-m02) DBG | Getting to WaitForSSH function...
	I0719 04:23:45.966826  145142 main.go:141] libmachine: (ha-925161-m02) Reserved static IP address: 192.168.39.102
	I0719 04:23:45.966840  145142 main.go:141] libmachine: (ha-925161-m02) Waiting for SSH to be available...
	I0719 04:23:45.969592  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:45.970039  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:minikube Clientid:01:52:54:00:17:48:0b}
	I0719 04:23:45.970070  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:45.970209  145142 main.go:141] libmachine: (ha-925161-m02) DBG | Using SSH client type: external
	I0719 04:23:45.970236  145142 main.go:141] libmachine: (ha-925161-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02/id_rsa (-rw-------)
	I0719 04:23:45.970279  145142 main.go:141] libmachine: (ha-925161-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 04:23:45.970297  145142 main.go:141] libmachine: (ha-925161-m02) DBG | About to run SSH command:
	I0719 04:23:45.970315  145142 main.go:141] libmachine: (ha-925161-m02) DBG | exit 0
	I0719 04:23:46.097303  145142 main.go:141] libmachine: (ha-925161-m02) DBG | SSH cmd err, output: <nil>: 
	I0719 04:23:46.097520  145142 main.go:141] libmachine: (ha-925161-m02) KVM machine creation complete!
	I0719 04:23:46.097761  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetConfigRaw
	I0719 04:23:46.098492  145142 main.go:141] libmachine: (ha-925161-m02) Calling .DriverName
	I0719 04:23:46.098703  145142 main.go:141] libmachine: (ha-925161-m02) Calling .DriverName
	I0719 04:23:46.098898  145142 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0719 04:23:46.098913  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetState
	I0719 04:23:46.100199  145142 main.go:141] libmachine: Detecting operating system of created instance...
	I0719 04:23:46.100218  145142 main.go:141] libmachine: Waiting for SSH to be available...
	I0719 04:23:46.100226  145142 main.go:141] libmachine: Getting to WaitForSSH function...
	I0719 04:23:46.100233  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHHostname
	I0719 04:23:46.102467  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:46.102798  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:23:46.102833  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:46.103085  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHPort
	I0719 04:23:46.103266  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:23:46.103426  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:23:46.103579  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHUsername
	I0719 04:23:46.103731  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:23:46.103931  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0719 04:23:46.103941  145142 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0719 04:23:46.208113  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 04:23:46.208139  145142 main.go:141] libmachine: Detecting the provisioner...
	I0719 04:23:46.208147  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHHostname
	I0719 04:23:46.210813  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:46.211254  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:23:46.211280  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:46.211417  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHPort
	I0719 04:23:46.211599  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:23:46.211750  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:23:46.211896  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHUsername
	I0719 04:23:46.212048  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:23:46.212210  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0719 04:23:46.212220  145142 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0719 04:23:46.317502  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0719 04:23:46.317586  145142 main.go:141] libmachine: found compatible host: buildroot
	I0719 04:23:46.317599  145142 main.go:141] libmachine: Provisioning with buildroot...
	I0719 04:23:46.317612  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetMachineName
	I0719 04:23:46.317879  145142 buildroot.go:166] provisioning hostname "ha-925161-m02"
	I0719 04:23:46.317907  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetMachineName
	I0719 04:23:46.318279  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHHostname
	I0719 04:23:46.321129  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:46.321504  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:23:46.321527  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:46.321709  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHPort
	I0719 04:23:46.321902  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:23:46.322063  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:23:46.322247  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHUsername
	I0719 04:23:46.322394  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:23:46.322615  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0719 04:23:46.322634  145142 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-925161-m02 && echo "ha-925161-m02" | sudo tee /etc/hostname
	I0719 04:23:46.443271  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-925161-m02
	
	I0719 04:23:46.443315  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHHostname
	I0719 04:23:46.446122  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:46.446458  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:23:46.446488  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:46.446756  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHPort
	I0719 04:23:46.446954  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:23:46.447142  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:23:46.447303  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHUsername
	I0719 04:23:46.447439  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:23:46.447605  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0719 04:23:46.447622  145142 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-925161-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-925161-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-925161-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 04:23:46.562033  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 04:23:46.562066  145142 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-122995/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-122995/.minikube}
	I0719 04:23:46.562095  145142 buildroot.go:174] setting up certificates
	I0719 04:23:46.562118  145142 provision.go:84] configureAuth start
	I0719 04:23:46.562136  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetMachineName
	I0719 04:23:46.562504  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetIP
	I0719 04:23:46.564747  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:46.565046  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:23:46.565090  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:46.565259  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHHostname
	I0719 04:23:46.567404  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:46.567799  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:23:46.567827  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:46.567965  145142 provision.go:143] copyHostCerts
	I0719 04:23:46.568002  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem
	I0719 04:23:46.568043  145142 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem, removing ...
	I0719 04:23:46.568053  145142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem
	I0719 04:23:46.568148  145142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem (1082 bytes)
	I0719 04:23:46.568235  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem
	I0719 04:23:46.568258  145142 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem, removing ...
	I0719 04:23:46.568266  145142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem
	I0719 04:23:46.568293  145142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem (1123 bytes)
	I0719 04:23:46.568343  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem
	I0719 04:23:46.568360  145142 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem, removing ...
	I0719 04:23:46.568366  145142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem
	I0719 04:23:46.568389  145142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem (1679 bytes)
	I0719 04:23:46.568441  145142 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem org=jenkins.ha-925161-m02 san=[127.0.0.1 192.168.39.102 ha-925161-m02 localhost minikube]
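Note: provision.go generates this server certificate in-process against the minikube CA, using the SANs listed above and the 26280h (1095-day) expiry from the machine config. A rough stand-alone sketch of the same signing step using openssl (purely illustrative; not how minikube implements it, and the local file names are assumptions) might look like:

    # create a key and CSR for the machine, then sign it with the minikube CA (bash, for the <() substitution)
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr -subj "/O=jenkins.ha-925161-m02"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 1095 \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.102,DNS:ha-925161-m02,DNS:localhost,DNS:minikube")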
	I0719 04:23:46.767791  145142 provision.go:177] copyRemoteCerts
	I0719 04:23:46.767850  145142 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 04:23:46.767876  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHHostname
	I0719 04:23:46.770577  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:46.770865  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:23:46.770890  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:46.771031  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHPort
	I0719 04:23:46.771229  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:23:46.771404  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHUsername
	I0719 04:23:46.771542  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02/id_rsa Username:docker}
	I0719 04:23:46.855920  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 04:23:46.855989  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 04:23:46.879566  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 04:23:46.879642  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0719 04:23:46.901751  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 04:23:46.901832  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 04:23:46.923420  145142 provision.go:87] duration metric: took 361.284659ms to configureAuth
	I0719 04:23:46.923449  145142 buildroot.go:189] setting minikube options for container-runtime
	I0719 04:23:46.923618  145142 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:23:46.923690  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHHostname
	I0719 04:23:46.926464  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:46.926812  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:23:46.926841  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:46.927022  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHPort
	I0719 04:23:46.927234  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:23:46.927409  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:23:46.927566  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHUsername
	I0719 04:23:46.927760  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:23:46.927928  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0719 04:23:46.927942  145142 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 04:23:47.180531  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 04:23:47.180559  145142 main.go:141] libmachine: Checking connection to Docker...
	I0719 04:23:47.180567  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetURL
	I0719 04:23:47.181999  145142 main.go:141] libmachine: (ha-925161-m02) DBG | Using libvirt version 6000000
	I0719 04:23:47.184247  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:47.184548  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:23:47.184577  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:47.184722  145142 main.go:141] libmachine: Docker is up and running!
	I0719 04:23:47.184737  145142 main.go:141] libmachine: Reticulating splines...
	I0719 04:23:47.184745  145142 client.go:171] duration metric: took 24.897645776s to LocalClient.Create
	I0719 04:23:47.184774  145142 start.go:167] duration metric: took 24.897712614s to libmachine.API.Create "ha-925161"
	I0719 04:23:47.184792  145142 start.go:293] postStartSetup for "ha-925161-m02" (driver="kvm2")
	I0719 04:23:47.184810  145142 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 04:23:47.184839  145142 main.go:141] libmachine: (ha-925161-m02) Calling .DriverName
	I0719 04:23:47.185138  145142 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 04:23:47.185170  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHHostname
	I0719 04:23:47.187457  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:47.187795  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:23:47.187814  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:47.188012  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHPort
	I0719 04:23:47.188205  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:23:47.188368  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHUsername
	I0719 04:23:47.188474  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02/id_rsa Username:docker}
	I0719 04:23:47.270775  145142 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 04:23:47.274946  145142 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 04:23:47.274973  145142 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-122995/.minikube/addons for local assets ...
	I0719 04:23:47.275048  145142 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-122995/.minikube/files for local assets ...
	I0719 04:23:47.275138  145142 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem -> 1301702.pem in /etc/ssl/certs
	I0719 04:23:47.275149  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem -> /etc/ssl/certs/1301702.pem
	I0719 04:23:47.275229  145142 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 04:23:47.283727  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem --> /etc/ssl/certs/1301702.pem (1708 bytes)
	I0719 04:23:47.305890  145142 start.go:296] duration metric: took 121.078307ms for postStartSetup
	I0719 04:23:47.305940  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetConfigRaw
	I0719 04:23:47.306507  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetIP
	I0719 04:23:47.309329  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:47.309738  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:23:47.309770  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:47.310048  145142 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/config.json ...
	I0719 04:23:47.310226  145142 start.go:128] duration metric: took 25.041200539s to createHost
	I0719 04:23:47.310250  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHHostname
	I0719 04:23:47.312540  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:47.312846  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:23:47.312874  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:47.313037  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHPort
	I0719 04:23:47.313221  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:23:47.313416  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:23:47.313546  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHUsername
	I0719 04:23:47.313686  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:23:47.313867  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0719 04:23:47.313886  145142 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 04:23:47.421517  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721363027.393588832
	
	I0719 04:23:47.421551  145142 fix.go:216] guest clock: 1721363027.393588832
	I0719 04:23:47.421562  145142 fix.go:229] Guest: 2024-07-19 04:23:47.393588832 +0000 UTC Remote: 2024-07-19 04:23:47.310238048 +0000 UTC m=+77.563362110 (delta=83.350784ms)
	I0719 04:23:47.421603  145142 fix.go:200] guest clock delta is within tolerance: 83.350784ms
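The clock check above runs "date +%s.%N" on the guest and compares the result against the host's wall clock, accepting the 83ms delta. A minimal standalone sketch of that comparison, assuming the remote command output has already been captured as a string (an illustration only, not minikube's fix.go):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // checkClockSkew parses the guest's `date +%s.%N` output and reports the
    // delta against the local clock, mirroring the tolerance check in the log.
    func checkClockSkew(guestOut string, tolerance time.Duration) error {
    	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
    	if err != nil {
    		return fmt.Errorf("parsing guest clock %q: %w", guestOut, err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	if delta > tolerance {
    		return fmt.Errorf("guest clock delta %v exceeds tolerance %v", delta, tolerance)
    	}
    	fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
    	return nil
    }

    func main() {
    	// Example value taken from the guest clock reading in the log above.
    	_ = checkClockSkew("1721363027.393588832", time.Second)
    }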
	I0719 04:23:47.421615  145142 start.go:83] releasing machines lock for "ha-925161-m02", held for 25.152696164s
	I0719 04:23:47.421643  145142 main.go:141] libmachine: (ha-925161-m02) Calling .DriverName
	I0719 04:23:47.421933  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetIP
	I0719 04:23:47.424529  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:47.424847  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:23:47.424874  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:47.427070  145142 out.go:177] * Found network options:
	I0719 04:23:47.428426  145142 out.go:177]   - NO_PROXY=192.168.39.246
	W0719 04:23:47.429480  145142 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 04:23:47.429512  145142 main.go:141] libmachine: (ha-925161-m02) Calling .DriverName
	I0719 04:23:47.430013  145142 main.go:141] libmachine: (ha-925161-m02) Calling .DriverName
	I0719 04:23:47.430180  145142 main.go:141] libmachine: (ha-925161-m02) Calling .DriverName
	I0719 04:23:47.430287  145142 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 04:23:47.430334  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHHostname
	W0719 04:23:47.430369  145142 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 04:23:47.430452  145142 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 04:23:47.430471  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHHostname
	I0719 04:23:47.433224  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:47.433608  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:23:47.433647  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:47.433672  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:47.433810  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHPort
	I0719 04:23:47.434009  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:23:47.434144  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:23:47.434152  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHUsername
	I0719 04:23:47.434170  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:47.434343  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHPort
	I0719 04:23:47.434346  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02/id_rsa Username:docker}
	I0719 04:23:47.434500  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:23:47.434650  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHUsername
	I0719 04:23:47.434857  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02/id_rsa Username:docker}
	I0719 04:23:47.665429  145142 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 04:23:47.670929  145142 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 04:23:47.670995  145142 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 04:23:47.685677  145142 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 04:23:47.685705  145142 start.go:495] detecting cgroup driver to use...
	I0719 04:23:47.685773  145142 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 04:23:47.701985  145142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 04:23:47.715043  145142 docker.go:217] disabling cri-docker service (if available) ...
	I0719 04:23:47.715109  145142 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 04:23:47.727963  145142 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 04:23:47.741231  145142 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 04:23:47.875807  145142 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 04:23:48.028981  145142 docker.go:233] disabling docker service ...
	I0719 04:23:48.029089  145142 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 04:23:48.042094  145142 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 04:23:48.053826  145142 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 04:23:48.163798  145142 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 04:23:48.284828  145142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 04:23:48.297864  145142 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 04:23:48.315689  145142 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 04:23:48.315752  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:23:48.325758  145142 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 04:23:48.325833  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:23:48.335811  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:23:48.345803  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:23:48.355829  145142 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 04:23:48.365892  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:23:48.375462  145142 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:23:48.390864  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:23:48.400585  145142 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 04:23:48.409893  145142 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 04:23:48.409952  145142 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 04:23:48.422843  145142 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 04:23:48.432050  145142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:23:48.553836  145142 ssh_runner.go:195] Run: sudo systemctl restart crio
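The block above rewrites /etc/crio/crio.conf.d/02-crio.conf with a series of sed one-liners (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) and then restarts CRI-O. A rough Go equivalent of the first two edits, shown only to make the sed commands easier to follow (a hypothetical helper, not minikube's crio.go):

    package main

    import (
    	"log"
    	"os"
    	"regexp"
    )

    // patchCrioConf mirrors the sed edits above: it pins the pause image and
    // switches the cgroup manager to cgroupfs in 02-crio.conf.
    func patchCrioConf(path string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	conf := string(data)
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	return os.WriteFile(path, []byte(conf), 0o644)
    }

    func main() {
    	if err := patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
    		log.Fatal(err)
    	}
    	// A `systemctl restart crio`, as in the log, is still needed afterwards.
    }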
	I0719 04:23:48.680828  145142 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 04:23:48.680906  145142 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 04:23:48.685127  145142 start.go:563] Will wait 60s for crictl version
	I0719 04:23:48.685196  145142 ssh_runner.go:195] Run: which crictl
	I0719 04:23:48.688577  145142 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 04:23:48.725770  145142 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 04:23:48.725843  145142 ssh_runner.go:195] Run: crio --version
	I0719 04:23:48.752843  145142 ssh_runner.go:195] Run: crio --version
	I0719 04:23:48.781297  145142 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 04:23:48.782551  145142 out.go:177]   - env NO_PROXY=192.168.39.246
	I0719 04:23:48.783615  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetIP
	I0719 04:23:48.786383  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:48.786766  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:23:48.786801  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:48.787041  145142 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 04:23:48.790762  145142 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 04:23:48.802032  145142 mustload.go:65] Loading cluster: ha-925161
	I0719 04:23:48.802203  145142 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:23:48.802483  145142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:23:48.802516  145142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:23:48.817217  145142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38311
	I0719 04:23:48.817735  145142 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:23:48.818268  145142 main.go:141] libmachine: Using API Version  1
	I0719 04:23:48.818287  145142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:23:48.818587  145142 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:23:48.818799  145142 main.go:141] libmachine: (ha-925161) Calling .GetState
	I0719 04:23:48.820214  145142 host.go:66] Checking if "ha-925161" exists ...
	I0719 04:23:48.820543  145142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:23:48.820571  145142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:23:48.835311  145142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41189
	I0719 04:23:48.835793  145142 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:23:48.836295  145142 main.go:141] libmachine: Using API Version  1
	I0719 04:23:48.836324  145142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:23:48.836663  145142 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:23:48.836837  145142 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:23:48.836992  145142 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161 for IP: 192.168.39.102
	I0719 04:23:48.837014  145142 certs.go:194] generating shared ca certs ...
	I0719 04:23:48.837032  145142 certs.go:226] acquiring lock for ca certs: {Name:mk4073377b5f511f5cfaf63e5b0f12377e731a57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:23:48.837193  145142 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key
	I0719 04:23:48.837232  145142 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key
	I0719 04:23:48.837242  145142 certs.go:256] generating profile certs ...
	I0719 04:23:48.837314  145142 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/client.key
	I0719 04:23:48.837338  145142 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key.fda840c7
	I0719 04:23:48.837355  145142 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt.fda840c7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246 192.168.39.102 192.168.39.254]
	I0719 04:23:48.993970  145142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt.fda840c7 ...
	I0719 04:23:48.994001  145142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt.fda840c7: {Name:mk90575d4c455f79af428bec6bc32c43a03c8046 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:23:48.994178  145142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key.fda840c7 ...
	I0719 04:23:48.994191  145142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key.fda840c7: {Name:mka50eebeeaf80e87f1fabc734dbcc58699400d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:23:48.994265  145142 certs.go:381] copying /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt.fda840c7 -> /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt
	I0719 04:23:48.994420  145142 certs.go:385] copying /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key.fda840c7 -> /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key
	I0719 04:23:48.994561  145142 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.key
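The profile apiserver certificate generated above carries IP SANs for the service IP, localhost, both node IPs and the HA VIP 192.168.39.254. As a hedged illustration of what such a SAN certificate looks like when built with Go's crypto/x509 (not minikube's own crypto.go; CA loading is omitted and an RSA CA key is assumed):

    package certsketch

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // newAPIServerCert signs a serving certificate whose IP SANs match the list
    // printed in the log; caCert and caKey are assumed to be loaded elsewhere.
    func newAPIServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// The SAN list printed in the log line above.
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    			net.ParseIP("192.168.39.246"), net.ParseIP("192.168.39.102"), net.ParseIP("192.168.39.254"),
    		},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	return der, key, nil
    }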
	I0719 04:23:48.994578  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 04:23:48.994591  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0719 04:23:48.994604  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 04:23:48.994617  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 04:23:48.994629  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 04:23:48.994640  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 04:23:48.994652  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 04:23:48.994665  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 04:23:48.994727  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem (1338 bytes)
	W0719 04:23:48.994755  145142 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170_empty.pem, impossibly tiny 0 bytes
	I0719 04:23:48.994765  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 04:23:48.994784  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem (1082 bytes)
	I0719 04:23:48.994806  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem (1123 bytes)
	I0719 04:23:48.994826  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem (1679 bytes)
	I0719 04:23:48.994860  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem (1708 bytes)
	I0719 04:23:48.994886  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem -> /usr/share/ca-certificates/1301702.pem
	I0719 04:23:48.994899  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:23:48.994911  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem -> /usr/share/ca-certificates/130170.pem
	I0719 04:23:48.994943  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:23:48.997901  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:23:48.998309  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:23:48.998348  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:23:48.998487  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:23:48.998679  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:23:48.998849  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:23:48.998986  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:23:49.073502  145142 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0719 04:23:49.078395  145142 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0719 04:23:49.091138  145142 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0719 04:23:49.095328  145142 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0719 04:23:49.104999  145142 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0719 04:23:49.108931  145142 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0719 04:23:49.118880  145142 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0719 04:23:49.122703  145142 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0719 04:23:49.132049  145142 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0719 04:23:49.135865  145142 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0719 04:23:49.147625  145142 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0719 04:23:49.154377  145142 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0719 04:23:49.164354  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 04:23:49.191960  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 04:23:49.214646  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 04:23:49.236420  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 04:23:49.258237  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0719 04:23:49.280651  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 04:23:49.305378  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 04:23:49.327065  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 04:23:49.348251  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem --> /usr/share/ca-certificates/1301702.pem (1708 bytes)
	I0719 04:23:49.369938  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 04:23:49.390999  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem --> /usr/share/ca-certificates/130170.pem (1338 bytes)
	I0719 04:23:49.412191  145142 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0719 04:23:49.428401  145142 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0719 04:23:49.443255  145142 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0719 04:23:49.462359  145142 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0719 04:23:49.478755  145142 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0719 04:23:49.493969  145142 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0719 04:23:49.509360  145142 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0719 04:23:49.524139  145142 ssh_runner.go:195] Run: openssl version
	I0719 04:23:49.529351  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1301702.pem && ln -fs /usr/share/ca-certificates/1301702.pem /etc/ssl/certs/1301702.pem"
	I0719 04:23:49.539045  145142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1301702.pem
	I0719 04:23:49.543097  145142 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 04:19 /usr/share/ca-certificates/1301702.pem
	I0719 04:23:49.543148  145142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1301702.pem
	I0719 04:23:49.548609  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1301702.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 04:23:49.558736  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 04:23:49.569099  145142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:23:49.573186  145142 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:38 /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:23:49.573243  145142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:23:49.578392  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 04:23:49.589548  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130170.pem && ln -fs /usr/share/ca-certificates/130170.pem /etc/ssl/certs/130170.pem"
	I0719 04:23:49.599258  145142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130170.pem
	I0719 04:23:49.603298  145142 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 04:19 /usr/share/ca-certificates/130170.pem
	I0719 04:23:49.603348  145142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130170.pem
	I0719 04:23:49.608653  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/130170.pem /etc/ssl/certs/51391683.0"
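The three test/link blocks above install each certificate under /usr/share/ca-certificates and then link /etc/ssl/certs/<subject-hash>.0 to it, which is the layout OpenSSL uses to discover trust anchors. A small sketch of that step, shelling out to openssl exactly as the log does (a hypothetical helper, not minikube's certs.go):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCACert computes the OpenSSL subject hash for certPath and creates the
    // /etc/ssl/certs/<hash>.0 symlink that the c_rehash layout expects.
    func linkCACert(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	// Recreate the link idempotently, mirroring `ln -fs` in the log.
    	_ = os.Remove(link)
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }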
	I0719 04:23:49.618539  145142 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 04:23:49.622126  145142 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 04:23:49.622181  145142 kubeadm.go:934] updating node {m02 192.168.39.102 8443 v1.30.3 crio true true} ...
	I0719 04:23:49.622285  145142 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-925161-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-925161 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 04:23:49.622311  145142 kube-vip.go:115] generating kube-vip config ...
	I0719 04:23:49.622351  145142 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0719 04:23:49.638753  145142 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0719 04:23:49.638820  145142 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0719 04:23:49.638878  145142 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 04:23:49.647909  145142 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0719 04:23:49.647970  145142 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0719 04:23:49.656432  145142 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0719 04:23:49.656457  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0719 04:23:49.656534  145142 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0719 04:23:49.656547  145142 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19302-122995/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0719 04:23:49.656576  145142 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19302-122995/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0719 04:23:49.660150  145142 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0719 04:23:49.660173  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0719 04:23:50.579314  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0719 04:23:50.579424  145142 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0719 04:23:50.583872  145142 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0719 04:23:50.583917  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0719 04:24:00.473124  145142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:24:00.489514  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0719 04:24:00.489617  145142 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0719 04:24:00.493642  145142 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0719 04:24:00.493674  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
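The binary transfer above skips the cache for kubectl and pulls kubeadm and kubelet from dl.k8s.io, verifying each download against its published .sha256 file before copying it to the node. A self-contained sketch of that download-and-verify pattern using only the standard library (an illustration, not minikube's download.go):

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"log"
    	"net/http"
    	"os"
    	"strings"
    )

    // fetchWithChecksum downloads url to dst and verifies it against the SHA-256
    // published at url+".sha256", as the kubelet/kubeadm URLs in the log do.
    func fetchWithChecksum(url, dst string) error {
    	sumResp, err := http.Get(url + ".sha256")
    	if err != nil {
    		return err
    	}
    	defer sumResp.Body.Close()
    	sumBytes, err := io.ReadAll(sumResp.Body)
    	if err != nil {
    		return err
    	}
    	want := strings.Fields(string(sumBytes))[0]

    	resp, err := http.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	f, err := os.Create(dst)
    	if err != nil {
    		return err
    	}
    	defer f.Close()

    	h := sha256.New()
    	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
    		return err
    	}
    	if got := hex.EncodeToString(h.Sum(nil)); got != want {
    		return fmt.Errorf("checksum mismatch for %s: got %s want %s", url, got, want)
    	}
    	return nil
    }

    func main() {
    	url := "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet"
    	if err := fetchWithChecksum(url, "kubelet"); err != nil {
    		log.Fatal(err)
    	}
    }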
	I0719 04:24:00.857157  145142 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0719 04:24:00.865878  145142 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0719 04:24:00.881358  145142 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 04:24:00.896419  145142 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0719 04:24:00.911634  145142 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0719 04:24:00.915392  145142 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 04:24:00.927170  145142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:24:01.036650  145142 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 04:24:01.053699  145142 host.go:66] Checking if "ha-925161" exists ...
	I0719 04:24:01.054086  145142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:24:01.054135  145142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:24:01.069107  145142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40495
	I0719 04:24:01.069669  145142 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:24:01.070271  145142 main.go:141] libmachine: Using API Version  1
	I0719 04:24:01.070302  145142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:24:01.070636  145142 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:24:01.070859  145142 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:24:01.071025  145142 start.go:317] joinCluster: &{Name:ha-925161 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-925161 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:24:01.071143  145142 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0719 04:24:01.071164  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:24:01.074471  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:24:01.074994  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:24:01.075023  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:24:01.075173  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:24:01.075337  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:24:01.075523  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:24:01.075642  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:24:01.228767  145142 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 04:24:01.228815  145142 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9brzgf.8utu0l810f8e3ass --discovery-token-ca-cert-hash sha256:1b8c9b438cd382daae07d0c80077e3e844c6e3a56a419c26c4cfa86e5846b833 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-925161-m02 --control-plane --apiserver-advertise-address=192.168.39.102 --apiserver-bind-port=8443"
	I0719 04:24:22.827882  145142 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9brzgf.8utu0l810f8e3ass --discovery-token-ca-cert-hash sha256:1b8c9b438cd382daae07d0c80077e3e844c6e3a56a419c26c4cfa86e5846b833 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-925161-m02 --control-plane --apiserver-advertise-address=192.168.39.102 --apiserver-bind-port=8443": (21.59903646s)
	I0719 04:24:22.827926  145142 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0719 04:24:23.438495  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-925161-m02 minikube.k8s.io/updated_at=2024_07_19T04_24_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db minikube.k8s.io/name=ha-925161 minikube.k8s.io/primary=false
	I0719 04:24:23.559514  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-925161-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0719 04:24:23.679031  145142 start.go:319] duration metric: took 22.608003168s to joinCluster
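Once kubeadm join returns, the two kubectl invocations above label the new node and remove the node-role.kubernetes.io/control-plane:NoSchedule taint so it can also schedule workloads. The same taint removal expressed with client-go, as a sketch only (the log actually shells out to kubectl; the kubeconfig path is the node-side one used above):

    package main

    import (
    	"context"
    	"log"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}

    	ctx := context.Background()
    	node, err := client.CoreV1().Nodes().Get(ctx, "ha-925161-m02", metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Keep every taint except the control-plane NoSchedule one, like the
    	// `kubectl taint ... node-role.kubernetes.io/control-plane:NoSchedule-` above.
    	kept := node.Spec.Taints[:0]
    	for _, t := range node.Spec.Taints {
    		if t.Key == "node-role.kubernetes.io/control-plane" && t.Effect == corev1.TaintEffectNoSchedule {
    			continue
    		}
    		kept = append(kept, t)
    	}
    	node.Spec.Taints = kept

    	if _, err := client.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
    		log.Fatal(err)
    	}
    }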
	I0719 04:24:23.679137  145142 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 04:24:23.679441  145142 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:24:23.680758  145142 out.go:177] * Verifying Kubernetes components...
	I0719 04:24:23.682098  145142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:24:23.924747  145142 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 04:24:23.982153  145142 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19302-122995/kubeconfig
	I0719 04:24:23.982537  145142 kapi.go:59] client config for ha-925161: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/client.crt", KeyFile:"/home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/client.key", CAFile:"/home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0719 04:24:23.982657  145142 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.246:8443
	I0719 04:24:23.982968  145142 node_ready.go:35] waiting up to 6m0s for node "ha-925161-m02" to be "Ready" ...
	I0719 04:24:23.983126  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:23.983138  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:23.983153  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:23.983162  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:23.995423  145142 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0719 04:24:24.484069  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:24.484102  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:24.484113  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:24.484119  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:24.500110  145142 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0719 04:24:24.983632  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:24.983664  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:24.983677  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:24.983683  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:24.987453  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:25.483529  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:25.483552  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:25.483563  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:25.483570  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:25.486570  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:25.984092  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:25.984114  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:25.984122  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:25.984127  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:25.986806  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:25.987500  145142 node_ready.go:53] node "ha-925161-m02" has status "Ready":"False"
	I0719 04:24:26.484135  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:26.484155  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:26.484164  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:26.484168  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:26.487748  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:26.983423  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:26.983449  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:26.983461  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:26.983477  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:26.986210  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:27.483515  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:27.483535  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:27.483543  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:27.483547  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:27.486181  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:27.983445  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:27.983470  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:27.983481  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:27.983487  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:27.986156  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:28.484084  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:28.484105  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:28.484112  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:28.484118  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:28.487235  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:28.487884  145142 node_ready.go:53] node "ha-925161-m02" has status "Ready":"False"
	I0719 04:24:28.984128  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:28.984151  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:28.984159  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:28.984164  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:28.988241  145142 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:24:29.483738  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:29.483765  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:29.483777  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:29.483783  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:29.486637  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:29.983290  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:29.983317  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:29.983328  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:29.983332  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:29.986486  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:30.483448  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:30.483470  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:30.483478  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:30.483481  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:30.486677  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:30.983547  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:30.983568  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:30.983575  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:30.983580  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:30.985837  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:30.986267  145142 node_ready.go:53] node "ha-925161-m02" has status "Ready":"False"
	I0719 04:24:31.484210  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:31.484231  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:31.484239  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:31.484243  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:31.487434  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:31.983236  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:31.983258  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:31.983267  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:31.983273  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:31.986453  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:32.483795  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:32.483817  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:32.483826  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:32.483831  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:32.487213  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:32.983266  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:32.983288  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:32.983296  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:32.983301  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:32.985895  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:32.986511  145142 node_ready.go:53] node "ha-925161-m02" has status "Ready":"False"
	I0719 04:24:33.483895  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:33.483918  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:33.483926  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:33.483930  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:33.487091  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:33.983991  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:33.984013  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:33.984021  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:33.984025  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:33.988363  145142 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:24:34.483908  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:34.483936  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:34.483948  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:34.483955  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:34.487526  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:34.983182  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:34.983207  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:34.983215  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:34.983220  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:34.986323  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:34.986816  145142 node_ready.go:53] node "ha-925161-m02" has status "Ready":"False"
	I0719 04:24:35.483191  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:35.483215  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:35.483224  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:35.483228  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:35.486266  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:35.983355  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:35.983376  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:35.983385  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:35.983389  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:35.986382  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:36.483867  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:36.483914  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:36.483927  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:36.483933  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:36.487876  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:36.983941  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:36.983964  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:36.983973  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:36.983975  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:36.987130  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:36.987656  145142 node_ready.go:53] node "ha-925161-m02" has status "Ready":"False"
	I0719 04:24:37.483528  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:37.483549  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:37.483558  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:37.483564  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:37.486511  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:37.983345  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:37.983366  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:37.983373  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:37.983380  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:37.990566  145142 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 04:24:38.483322  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:38.483354  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:38.483363  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:38.483368  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:38.486402  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:38.984161  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:38.984183  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:38.984191  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:38.984194  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:38.987852  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:38.988478  145142 node_ready.go:53] node "ha-925161-m02" has status "Ready":"False"
	I0719 04:24:39.483934  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:39.483957  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:39.483965  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:39.483968  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:39.486746  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:39.983627  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:39.983653  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:39.983661  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:39.983666  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:39.987134  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:40.483388  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:40.483413  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:40.483422  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:40.483427  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:40.486525  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:40.984049  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:40.984071  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:40.984079  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:40.984082  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:40.987243  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:41.483499  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:41.483521  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:41.483529  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:41.483532  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:41.486473  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:41.487029  145142 node_ready.go:49] node "ha-925161-m02" has status "Ready":"True"
	I0719 04:24:41.487047  145142 node_ready.go:38] duration metric: took 17.504036182s for node "ha-925161-m02" to be "Ready" ...
	I0719 04:24:41.487055  145142 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 04:24:41.487155  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0719 04:24:41.487166  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:41.487178  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:41.487187  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:41.491881  145142 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:24:41.497481  145142 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7wzcg" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:41.497561  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7wzcg
	I0719 04:24:41.497570  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:41.497577  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:41.497582  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:41.500114  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:41.500671  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:24:41.500687  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:41.500695  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:41.500700  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:41.503362  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:41.506243  145142 pod_ready.go:92] pod "coredns-7db6d8ff4d-7wzcg" in "kube-system" namespace has status "Ready":"True"
	I0719 04:24:41.506264  145142 pod_ready.go:81] duration metric: took 8.757705ms for pod "coredns-7db6d8ff4d-7wzcg" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:41.506273  145142 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hwdsq" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:41.506325  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hwdsq
	I0719 04:24:41.506332  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:41.506340  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:41.506343  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:41.508774  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:41.509717  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:24:41.509734  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:41.509741  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:41.509745  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:41.511828  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:41.512452  145142 pod_ready.go:92] pod "coredns-7db6d8ff4d-hwdsq" in "kube-system" namespace has status "Ready":"True"
	I0719 04:24:41.512466  145142 pod_ready.go:81] duration metric: took 6.187276ms for pod "coredns-7db6d8ff4d-hwdsq" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:41.512474  145142 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-925161" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:41.512520  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-925161
	I0719 04:24:41.512527  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:41.512533  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:41.512537  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:41.514760  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:41.515247  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:24:41.515261  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:41.515268  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:41.515273  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:41.517392  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:41.518029  145142 pod_ready.go:92] pod "etcd-ha-925161" in "kube-system" namespace has status "Ready":"True"
	I0719 04:24:41.518040  145142 pod_ready.go:81] duration metric: took 5.560858ms for pod "etcd-ha-925161" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:41.518062  145142 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-925161-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:41.518108  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-925161-m02
	I0719 04:24:41.518117  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:41.518129  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:41.518137  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:41.520250  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:41.520719  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:41.520731  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:41.520737  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:41.520741  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:41.522882  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:41.523335  145142 pod_ready.go:92] pod "etcd-ha-925161-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 04:24:41.523350  145142 pod_ready.go:81] duration metric: took 5.280299ms for pod "etcd-ha-925161-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:41.523363  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-925161" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:41.683694  145142 request.go:629] Waited for 160.274101ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-925161
	I0719 04:24:41.683768  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-925161
	I0719 04:24:41.683776  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:41.683784  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:41.683789  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:41.686762  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:41.883724  145142 request.go:629] Waited for 196.348187ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:24:41.883811  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:24:41.883818  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:41.883826  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:41.883830  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:41.886885  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:41.887451  145142 pod_ready.go:92] pod "kube-apiserver-ha-925161" in "kube-system" namespace has status "Ready":"True"
	I0719 04:24:41.887473  145142 pod_ready.go:81] duration metric: took 364.101211ms for pod "kube-apiserver-ha-925161" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:41.887482  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-925161-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:42.083505  145142 request.go:629] Waited for 195.9553ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-925161-m02
	I0719 04:24:42.083574  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-925161-m02
	I0719 04:24:42.083580  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:42.083588  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:42.083595  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:42.087185  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:42.284189  145142 request.go:629] Waited for 196.390812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:42.284250  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:42.284256  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:42.284267  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:42.284273  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:42.287216  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:42.287756  145142 pod_ready.go:92] pod "kube-apiserver-ha-925161-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 04:24:42.287775  145142 pod_ready.go:81] duration metric: took 400.286107ms for pod "kube-apiserver-ha-925161-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:42.287785  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-925161" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:42.484342  145142 request.go:629] Waited for 196.491923ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-925161
	I0719 04:24:42.484401  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-925161
	I0719 04:24:42.484406  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:42.484414  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:42.484417  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:42.487884  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:42.683962  145142 request.go:629] Waited for 195.25386ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:24:42.684032  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:24:42.684039  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:42.684054  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:42.684061  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:42.687387  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:42.687963  145142 pod_ready.go:92] pod "kube-controller-manager-ha-925161" in "kube-system" namespace has status "Ready":"True"
	I0719 04:24:42.687981  145142 pod_ready.go:81] duration metric: took 400.190541ms for pod "kube-controller-manager-ha-925161" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:42.687992  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-925161-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:42.884148  145142 request.go:629] Waited for 196.059016ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-925161-m02
	I0719 04:24:42.884220  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-925161-m02
	I0719 04:24:42.884227  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:42.884241  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:42.884248  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:42.887682  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:43.083653  145142 request.go:629] Waited for 195.282224ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:43.083743  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:43.083749  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:43.083772  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:43.083791  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:43.086880  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:43.088769  145142 pod_ready.go:92] pod "kube-controller-manager-ha-925161-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 04:24:43.088788  145142 pod_ready.go:81] duration metric: took 400.789348ms for pod "kube-controller-manager-ha-925161-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:43.088798  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8dbqt" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:43.283909  145142 request.go:629] Waited for 195.041931ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8dbqt
	I0719 04:24:43.283990  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8dbqt
	I0719 04:24:43.283995  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:43.284001  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:43.284006  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:43.287323  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:43.484238  145142 request.go:629] Waited for 196.366124ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:24:43.484313  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:24:43.484320  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:43.484329  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:43.484336  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:43.487830  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:43.488553  145142 pod_ready.go:92] pod "kube-proxy-8dbqt" in "kube-system" namespace has status "Ready":"True"
	I0719 04:24:43.488576  145142 pod_ready.go:81] duration metric: took 399.770059ms for pod "kube-proxy-8dbqt" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:43.488589  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s6df4" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:43.683505  145142 request.go:629] Waited for 194.836143ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s6df4
	I0719 04:24:43.683582  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s6df4
	I0719 04:24:43.683587  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:43.683596  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:43.683601  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:43.686684  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:43.884063  145142 request.go:629] Waited for 196.777643ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:43.884159  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:43.884165  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:43.884175  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:43.884180  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:43.887036  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:43.887613  145142 pod_ready.go:92] pod "kube-proxy-s6df4" in "kube-system" namespace has status "Ready":"True"
	I0719 04:24:43.887632  145142 pod_ready.go:81] duration metric: took 399.029983ms for pod "kube-proxy-s6df4" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:43.887644  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-925161" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:44.083814  145142 request.go:629] Waited for 196.092093ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-925161
	I0719 04:24:44.083875  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-925161
	I0719 04:24:44.083880  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:44.083888  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:44.083891  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:44.086868  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:44.283811  145142 request.go:629] Waited for 196.379807ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:24:44.283868  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:24:44.283874  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:44.283887  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:44.283895  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:44.287178  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:44.287810  145142 pod_ready.go:92] pod "kube-scheduler-ha-925161" in "kube-system" namespace has status "Ready":"True"
	I0719 04:24:44.287832  145142 pod_ready.go:81] duration metric: took 400.18128ms for pod "kube-scheduler-ha-925161" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:44.287843  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-925161-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:44.483879  145142 request.go:629] Waited for 195.944853ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-925161-m02
	I0719 04:24:44.483959  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-925161-m02
	I0719 04:24:44.483968  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:44.483983  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:44.483991  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:44.486930  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:44.684019  145142 request.go:629] Waited for 196.375072ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:44.684110  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:44.684119  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:44.684127  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:44.684132  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:44.687081  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:44.687679  145142 pod_ready.go:92] pod "kube-scheduler-ha-925161-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 04:24:44.687700  145142 pod_ready.go:81] duration metric: took 399.847674ms for pod "kube-scheduler-ha-925161-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:44.687711  145142 pod_ready.go:38] duration metric: took 3.200605814s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 04:24:44.687729  145142 api_server.go:52] waiting for apiserver process to appear ...
	I0719 04:24:44.687795  145142 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 04:24:44.702903  145142 api_server.go:72] duration metric: took 21.023722699s to wait for apiserver process to appear ...
	I0719 04:24:44.702931  145142 api_server.go:88] waiting for apiserver healthz status ...
	I0719 04:24:44.702955  145142 api_server.go:253] Checking apiserver healthz at https://192.168.39.246:8443/healthz ...
	I0719 04:24:44.712256  145142 api_server.go:279] https://192.168.39.246:8443/healthz returned 200:
	ok
	I0719 04:24:44.712320  145142 round_trippers.go:463] GET https://192.168.39.246:8443/version
	I0719 04:24:44.712327  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:44.712335  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:44.712340  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:44.713127  145142 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 04:24:44.713222  145142 api_server.go:141] control plane version: v1.30.3
	I0719 04:24:44.713237  145142 api_server.go:131] duration metric: took 10.299058ms to wait for apiserver health ...
	I0719 04:24:44.713245  145142 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 04:24:44.883646  145142 request.go:629] Waited for 170.322673ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0719 04:24:44.883704  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0719 04:24:44.883711  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:44.883719  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:44.883726  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:44.889407  145142 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:24:44.893381  145142 system_pods.go:59] 17 kube-system pods found
	I0719 04:24:44.893406  145142 system_pods.go:61] "coredns-7db6d8ff4d-7wzcg" [a434f69a-903d-4961-a54c-9a85cbc694b1] Running
	I0719 04:24:44.893411  145142 system_pods.go:61] "coredns-7db6d8ff4d-hwdsq" [894f9528-78da-4cae-9ec6-8e82a3e73264] Running
	I0719 04:24:44.893415  145142 system_pods.go:61] "etcd-ha-925161" [35b14af9-6e7d-4e5c-8c43-fa427109cde3] Running
	I0719 04:24:44.893419  145142 system_pods.go:61] "etcd-ha-925161-m02" [51f60536-03dc-4426-ac13-9d2ec33275f7] Running
	I0719 04:24:44.893422  145142 system_pods.go:61] "kindnet-dkctc" [4ec93698-4a91-44fa-a37f-405bf1a5fa95] Running
	I0719 04:24:44.893424  145142 system_pods.go:61] "kindnet-fsr5f" [988e1118-927a-4468-ba25-3a78d8d06919] Running
	I0719 04:24:44.893428  145142 system_pods.go:61] "kube-apiserver-ha-925161" [1c56f8e6-beb8-4dcc-ba56-5097516043a6] Running
	I0719 04:24:44.893432  145142 system_pods.go:61] "kube-apiserver-ha-925161-m02" [ceaa5f20-d023-482a-9905-54f8bc47da20] Running
	I0719 04:24:44.893436  145142 system_pods.go:61] "kube-controller-manager-ha-925161" [337e75e4-92e9-48fd-a46a-73ce174b4995] Running
	I0719 04:24:44.893439  145142 system_pods.go:61] "kube-controller-manager-ha-925161-m02" [d2d234a3-a18f-4618-9b77-4bcf771463b8] Running
	I0719 04:24:44.893444  145142 system_pods.go:61] "kube-proxy-8dbqt" [cd11aac3-62df-4603-8102-3384bcc100f1] Running
	I0719 04:24:44.893450  145142 system_pods.go:61] "kube-proxy-s6df4" [3373d2d8-4189-48a0-aefc-2ad0511b2a6b] Running
	I0719 04:24:44.893453  145142 system_pods.go:61] "kube-scheduler-ha-925161" [6c1c9f30-93c9-4def-b54e-97b8e27cd12b] Running
	I0719 04:24:44.893456  145142 system_pods.go:61] "kube-scheduler-ha-925161-m02" [60ea2e22-0456-40bc-bddd-32b6737350b3] Running
	I0719 04:24:44.893459  145142 system_pods.go:61] "kube-vip-ha-925161" [8d01a874-336e-476c-b079-852250b3bbcd] Running
	I0719 04:24:44.893462  145142 system_pods.go:61] "kube-vip-ha-925161-m02" [0cb6b1ed-566b-4f64-903b-5af108816970] Running
	I0719 04:24:44.893467  145142 system_pods.go:61] "storage-provisioner" [bf27da3d-f736-4742-9af5-2c0a024075ec] Running
	I0719 04:24:44.893473  145142 system_pods.go:74] duration metric: took 180.220665ms to wait for pod list to return data ...
	I0719 04:24:44.893483  145142 default_sa.go:34] waiting for default service account to be created ...
	I0719 04:24:45.083908  145142 request.go:629] Waited for 190.345344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I0719 04:24:45.083977  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I0719 04:24:45.083985  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:45.083996  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:45.084003  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:45.087061  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:45.087310  145142 default_sa.go:45] found service account: "default"
	I0719 04:24:45.087332  145142 default_sa.go:55] duration metric: took 193.841784ms for default service account to be created ...
	I0719 04:24:45.087351  145142 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 04:24:45.283715  145142 request.go:629] Waited for 196.280501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0719 04:24:45.283788  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0719 04:24:45.283796  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:45.283804  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:45.283809  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:45.288696  145142 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:24:45.293002  145142 system_pods.go:86] 17 kube-system pods found
	I0719 04:24:45.293031  145142 system_pods.go:89] "coredns-7db6d8ff4d-7wzcg" [a434f69a-903d-4961-a54c-9a85cbc694b1] Running
	I0719 04:24:45.293039  145142 system_pods.go:89] "coredns-7db6d8ff4d-hwdsq" [894f9528-78da-4cae-9ec6-8e82a3e73264] Running
	I0719 04:24:45.293045  145142 system_pods.go:89] "etcd-ha-925161" [35b14af9-6e7d-4e5c-8c43-fa427109cde3] Running
	I0719 04:24:45.293051  145142 system_pods.go:89] "etcd-ha-925161-m02" [51f60536-03dc-4426-ac13-9d2ec33275f7] Running
	I0719 04:24:45.293057  145142 system_pods.go:89] "kindnet-dkctc" [4ec93698-4a91-44fa-a37f-405bf1a5fa95] Running
	I0719 04:24:45.293073  145142 system_pods.go:89] "kindnet-fsr5f" [988e1118-927a-4468-ba25-3a78d8d06919] Running
	I0719 04:24:45.293080  145142 system_pods.go:89] "kube-apiserver-ha-925161" [1c56f8e6-beb8-4dcc-ba56-5097516043a6] Running
	I0719 04:24:45.293087  145142 system_pods.go:89] "kube-apiserver-ha-925161-m02" [ceaa5f20-d023-482a-9905-54f8bc47da20] Running
	I0719 04:24:45.293094  145142 system_pods.go:89] "kube-controller-manager-ha-925161" [337e75e4-92e9-48fd-a46a-73ce174b4995] Running
	I0719 04:24:45.293101  145142 system_pods.go:89] "kube-controller-manager-ha-925161-m02" [d2d234a3-a18f-4618-9b77-4bcf771463b8] Running
	I0719 04:24:45.293117  145142 system_pods.go:89] "kube-proxy-8dbqt" [cd11aac3-62df-4603-8102-3384bcc100f1] Running
	I0719 04:24:45.293125  145142 system_pods.go:89] "kube-proxy-s6df4" [3373d2d8-4189-48a0-aefc-2ad0511b2a6b] Running
	I0719 04:24:45.293131  145142 system_pods.go:89] "kube-scheduler-ha-925161" [6c1c9f30-93c9-4def-b54e-97b8e27cd12b] Running
	I0719 04:24:45.293138  145142 system_pods.go:89] "kube-scheduler-ha-925161-m02" [60ea2e22-0456-40bc-bddd-32b6737350b3] Running
	I0719 04:24:45.293145  145142 system_pods.go:89] "kube-vip-ha-925161" [8d01a874-336e-476c-b079-852250b3bbcd] Running
	I0719 04:24:45.293151  145142 system_pods.go:89] "kube-vip-ha-925161-m02" [0cb6b1ed-566b-4f64-903b-5af108816970] Running
	I0719 04:24:45.293157  145142 system_pods.go:89] "storage-provisioner" [bf27da3d-f736-4742-9af5-2c0a024075ec] Running
	I0719 04:24:45.293168  145142 system_pods.go:126] duration metric: took 205.808287ms to wait for k8s-apps to be running ...
	I0719 04:24:45.293180  145142 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 04:24:45.293234  145142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:24:45.306948  145142 system_svc.go:56] duration metric: took 13.758933ms WaitForService to wait for kubelet
	I0719 04:24:45.306981  145142 kubeadm.go:582] duration metric: took 21.627805849s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 04:24:45.307006  145142 node_conditions.go:102] verifying NodePressure condition ...
	I0719 04:24:45.484291  145142 request.go:629] Waited for 177.207278ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes
	I0719 04:24:45.484368  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes
	I0719 04:24:45.484376  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:45.484386  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:45.484396  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:45.487559  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:45.488510  145142 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 04:24:45.488533  145142 node_conditions.go:123] node cpu capacity is 2
	I0719 04:24:45.488548  145142 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 04:24:45.488552  145142 node_conditions.go:123] node cpu capacity is 2
	I0719 04:24:45.488558  145142 node_conditions.go:105] duration metric: took 181.546937ms to run NodePressure ...
	I0719 04:24:45.488572  145142 start.go:241] waiting for startup goroutines ...
	I0719 04:24:45.488604  145142 start.go:255] writing updated cluster config ...
	I0719 04:24:45.490487  145142 out.go:177] 
	I0719 04:24:45.491857  145142 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:24:45.492021  145142 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/config.json ...
	I0719 04:24:45.493666  145142 out.go:177] * Starting "ha-925161-m03" control-plane node in "ha-925161" cluster
	I0719 04:24:45.494700  145142 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 04:24:45.494718  145142 cache.go:56] Caching tarball of preloaded images
	I0719 04:24:45.494818  145142 preload.go:172] Found /home/jenkins/minikube-integration/19302-122995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 04:24:45.494831  145142 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 04:24:45.494912  145142 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/config.json ...
	I0719 04:24:45.495065  145142 start.go:360] acquireMachinesLock for ha-925161-m03: {Name:mkfbbe6ca8c44534b944b48224a0199ec825bc72 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 04:24:45.495118  145142 start.go:364] duration metric: took 31.277µs to acquireMachinesLock for "ha-925161-m03"
	I0719 04:24:45.495140  145142 start.go:93] Provisioning new machine with config: &{Name:ha-925161 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-925161 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 04:24:45.495233  145142 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0719 04:24:45.496679  145142 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 04:24:45.496756  145142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:24:45.496794  145142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:24:45.512273  145142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40107
	I0719 04:24:45.512703  145142 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:24:45.513189  145142 main.go:141] libmachine: Using API Version  1
	I0719 04:24:45.513209  145142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:24:45.513532  145142 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:24:45.513756  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetMachineName
	I0719 04:24:45.513896  145142 main.go:141] libmachine: (ha-925161-m03) Calling .DriverName
	I0719 04:24:45.514043  145142 start.go:159] libmachine.API.Create for "ha-925161" (driver="kvm2")
	I0719 04:24:45.514078  145142 client.go:168] LocalClient.Create starting
	I0719 04:24:45.514113  145142 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem
	I0719 04:24:45.514150  145142 main.go:141] libmachine: Decoding PEM data...
	I0719 04:24:45.514167  145142 main.go:141] libmachine: Parsing certificate...
	I0719 04:24:45.514234  145142 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem
	I0719 04:24:45.514256  145142 main.go:141] libmachine: Decoding PEM data...
	I0719 04:24:45.514269  145142 main.go:141] libmachine: Parsing certificate...
	I0719 04:24:45.514293  145142 main.go:141] libmachine: Running pre-create checks...
	I0719 04:24:45.514304  145142 main.go:141] libmachine: (ha-925161-m03) Calling .PreCreateCheck
	I0719 04:24:45.514493  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetConfigRaw
	I0719 04:24:45.514962  145142 main.go:141] libmachine: Creating machine...
	I0719 04:24:45.514981  145142 main.go:141] libmachine: (ha-925161-m03) Calling .Create
	I0719 04:24:45.515160  145142 main.go:141] libmachine: (ha-925161-m03) Creating KVM machine...
	I0719 04:24:45.516466  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found existing default KVM network
	I0719 04:24:45.516574  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found existing private KVM network mk-ha-925161
	I0719 04:24:45.516795  145142 main.go:141] libmachine: (ha-925161-m03) Setting up store path in /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03 ...
	I0719 04:24:45.516819  145142 main.go:141] libmachine: (ha-925161-m03) Building disk image from file:///home/jenkins/minikube-integration/19302-122995/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 04:24:45.516872  145142 main.go:141] libmachine: (ha-925161-m03) DBG | I0719 04:24:45.516768  145993 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19302-122995/.minikube
	I0719 04:24:45.516968  145142 main.go:141] libmachine: (ha-925161-m03) Downloading /home/jenkins/minikube-integration/19302-122995/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19302-122995/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 04:24:45.748018  145142 main.go:141] libmachine: (ha-925161-m03) DBG | I0719 04:24:45.747871  145993 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03/id_rsa...
	I0719 04:24:45.793443  145142 main.go:141] libmachine: (ha-925161-m03) DBG | I0719 04:24:45.793312  145993 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03/ha-925161-m03.rawdisk...
	I0719 04:24:45.793472  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Writing magic tar header
	I0719 04:24:45.793482  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Writing SSH key tar header
	I0719 04:24:45.793493  145142 main.go:141] libmachine: (ha-925161-m03) DBG | I0719 04:24:45.793428  145993 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03 ...
	I0719 04:24:45.793583  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03
	I0719 04:24:45.793605  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995/.minikube/machines
	I0719 04:24:45.793617  145142 main.go:141] libmachine: (ha-925161-m03) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03 (perms=drwx------)
	I0719 04:24:45.793631  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995/.minikube
	I0719 04:24:45.793647  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995
	I0719 04:24:45.793659  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0719 04:24:45.793672  145142 main.go:141] libmachine: (ha-925161-m03) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995/.minikube/machines (perms=drwxr-xr-x)
	I0719 04:24:45.793690  145142 main.go:141] libmachine: (ha-925161-m03) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995/.minikube (perms=drwxr-xr-x)
	I0719 04:24:45.793701  145142 main.go:141] libmachine: (ha-925161-m03) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995 (perms=drwxrwxr-x)
	I0719 04:24:45.793713  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Checking permissions on dir: /home/jenkins
	I0719 04:24:45.793730  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Checking permissions on dir: /home
	I0719 04:24:45.793743  145142 main.go:141] libmachine: (ha-925161-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0719 04:24:45.793754  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Skipping /home - not owner
	I0719 04:24:45.793768  145142 main.go:141] libmachine: (ha-925161-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0719 04:24:45.793778  145142 main.go:141] libmachine: (ha-925161-m03) Creating domain...
	I0719 04:24:45.794631  145142 main.go:141] libmachine: (ha-925161-m03) define libvirt domain using xml: 
	I0719 04:24:45.794657  145142 main.go:141] libmachine: (ha-925161-m03) <domain type='kvm'>
	I0719 04:24:45.794673  145142 main.go:141] libmachine: (ha-925161-m03)   <name>ha-925161-m03</name>
	I0719 04:24:45.794681  145142 main.go:141] libmachine: (ha-925161-m03)   <memory unit='MiB'>2200</memory>
	I0719 04:24:45.794712  145142 main.go:141] libmachine: (ha-925161-m03)   <vcpu>2</vcpu>
	I0719 04:24:45.794734  145142 main.go:141] libmachine: (ha-925161-m03)   <features>
	I0719 04:24:45.794743  145142 main.go:141] libmachine: (ha-925161-m03)     <acpi/>
	I0719 04:24:45.794750  145142 main.go:141] libmachine: (ha-925161-m03)     <apic/>
	I0719 04:24:45.794756  145142 main.go:141] libmachine: (ha-925161-m03)     <pae/>
	I0719 04:24:45.794764  145142 main.go:141] libmachine: (ha-925161-m03)     
	I0719 04:24:45.794772  145142 main.go:141] libmachine: (ha-925161-m03)   </features>
	I0719 04:24:45.794784  145142 main.go:141] libmachine: (ha-925161-m03)   <cpu mode='host-passthrough'>
	I0719 04:24:45.794797  145142 main.go:141] libmachine: (ha-925161-m03)   
	I0719 04:24:45.794804  145142 main.go:141] libmachine: (ha-925161-m03)   </cpu>
	I0719 04:24:45.794826  145142 main.go:141] libmachine: (ha-925161-m03)   <os>
	I0719 04:24:45.794846  145142 main.go:141] libmachine: (ha-925161-m03)     <type>hvm</type>
	I0719 04:24:45.794856  145142 main.go:141] libmachine: (ha-925161-m03)     <boot dev='cdrom'/>
	I0719 04:24:45.794866  145142 main.go:141] libmachine: (ha-925161-m03)     <boot dev='hd'/>
	I0719 04:24:45.794876  145142 main.go:141] libmachine: (ha-925161-m03)     <bootmenu enable='no'/>
	I0719 04:24:45.794885  145142 main.go:141] libmachine: (ha-925161-m03)   </os>
	I0719 04:24:45.794893  145142 main.go:141] libmachine: (ha-925161-m03)   <devices>
	I0719 04:24:45.794904  145142 main.go:141] libmachine: (ha-925161-m03)     <disk type='file' device='cdrom'>
	I0719 04:24:45.794925  145142 main.go:141] libmachine: (ha-925161-m03)       <source file='/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03/boot2docker.iso'/>
	I0719 04:24:45.794942  145142 main.go:141] libmachine: (ha-925161-m03)       <target dev='hdc' bus='scsi'/>
	I0719 04:24:45.794949  145142 main.go:141] libmachine: (ha-925161-m03)       <readonly/>
	I0719 04:24:45.794954  145142 main.go:141] libmachine: (ha-925161-m03)     </disk>
	I0719 04:24:45.794960  145142 main.go:141] libmachine: (ha-925161-m03)     <disk type='file' device='disk'>
	I0719 04:24:45.794969  145142 main.go:141] libmachine: (ha-925161-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0719 04:24:45.794981  145142 main.go:141] libmachine: (ha-925161-m03)       <source file='/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03/ha-925161-m03.rawdisk'/>
	I0719 04:24:45.794987  145142 main.go:141] libmachine: (ha-925161-m03)       <target dev='hda' bus='virtio'/>
	I0719 04:24:45.794992  145142 main.go:141] libmachine: (ha-925161-m03)     </disk>
	I0719 04:24:45.795001  145142 main.go:141] libmachine: (ha-925161-m03)     <interface type='network'>
	I0719 04:24:45.795007  145142 main.go:141] libmachine: (ha-925161-m03)       <source network='mk-ha-925161'/>
	I0719 04:24:45.795013  145142 main.go:141] libmachine: (ha-925161-m03)       <model type='virtio'/>
	I0719 04:24:45.795024  145142 main.go:141] libmachine: (ha-925161-m03)     </interface>
	I0719 04:24:45.795035  145142 main.go:141] libmachine: (ha-925161-m03)     <interface type='network'>
	I0719 04:24:45.795049  145142 main.go:141] libmachine: (ha-925161-m03)       <source network='default'/>
	I0719 04:24:45.795060  145142 main.go:141] libmachine: (ha-925161-m03)       <model type='virtio'/>
	I0719 04:24:45.795069  145142 main.go:141] libmachine: (ha-925161-m03)     </interface>
	I0719 04:24:45.795081  145142 main.go:141] libmachine: (ha-925161-m03)     <serial type='pty'>
	I0719 04:24:45.795090  145142 main.go:141] libmachine: (ha-925161-m03)       <target port='0'/>
	I0719 04:24:45.795100  145142 main.go:141] libmachine: (ha-925161-m03)     </serial>
	I0719 04:24:45.795120  145142 main.go:141] libmachine: (ha-925161-m03)     <console type='pty'>
	I0719 04:24:45.795133  145142 main.go:141] libmachine: (ha-925161-m03)       <target type='serial' port='0'/>
	I0719 04:24:45.795144  145142 main.go:141] libmachine: (ha-925161-m03)     </console>
	I0719 04:24:45.795158  145142 main.go:141] libmachine: (ha-925161-m03)     <rng model='virtio'>
	I0719 04:24:45.795171  145142 main.go:141] libmachine: (ha-925161-m03)       <backend model='random'>/dev/random</backend>
	I0719 04:24:45.795180  145142 main.go:141] libmachine: (ha-925161-m03)     </rng>
	I0719 04:24:45.795188  145142 main.go:141] libmachine: (ha-925161-m03)     
	I0719 04:24:45.795197  145142 main.go:141] libmachine: (ha-925161-m03)     
	I0719 04:24:45.795206  145142 main.go:141] libmachine: (ha-925161-m03)   </devices>
	I0719 04:24:45.795215  145142 main.go:141] libmachine: (ha-925161-m03) </domain>
	I0719 04:24:45.795234  145142 main.go:141] libmachine: (ha-925161-m03) 
	I0719 04:24:45.802289  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:eb:36:80 in network default
	I0719 04:24:45.802865  145142 main.go:141] libmachine: (ha-925161-m03) Ensuring networks are active...
	I0719 04:24:45.802887  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:24:45.803742  145142 main.go:141] libmachine: (ha-925161-m03) Ensuring network default is active
	I0719 04:24:45.804122  145142 main.go:141] libmachine: (ha-925161-m03) Ensuring network mk-ha-925161 is active
	I0719 04:24:45.804522  145142 main.go:141] libmachine: (ha-925161-m03) Getting domain xml...
	I0719 04:24:45.805309  145142 main.go:141] libmachine: (ha-925161-m03) Creating domain...
	I0719 04:24:47.015997  145142 main.go:141] libmachine: (ha-925161-m03) Waiting to get IP...
	I0719 04:24:47.016773  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:24:47.017215  145142 main.go:141] libmachine: (ha-925161-m03) DBG | unable to find current IP address of domain ha-925161-m03 in network mk-ha-925161
	I0719 04:24:47.017233  145142 main.go:141] libmachine: (ha-925161-m03) DBG | I0719 04:24:47.017197  145993 retry.go:31] will retry after 277.025133ms: waiting for machine to come up
	I0719 04:24:47.295814  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:24:47.296340  145142 main.go:141] libmachine: (ha-925161-m03) DBG | unable to find current IP address of domain ha-925161-m03 in network mk-ha-925161
	I0719 04:24:47.296373  145142 main.go:141] libmachine: (ha-925161-m03) DBG | I0719 04:24:47.296303  145993 retry.go:31] will retry after 346.173005ms: waiting for machine to come up
	I0719 04:24:47.643714  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:24:47.644205  145142 main.go:141] libmachine: (ha-925161-m03) DBG | unable to find current IP address of domain ha-925161-m03 in network mk-ha-925161
	I0719 04:24:47.644232  145142 main.go:141] libmachine: (ha-925161-m03) DBG | I0719 04:24:47.644151  145993 retry.go:31] will retry after 354.698058ms: waiting for machine to come up
	I0719 04:24:48.000724  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:24:48.001183  145142 main.go:141] libmachine: (ha-925161-m03) DBG | unable to find current IP address of domain ha-925161-m03 in network mk-ha-925161
	I0719 04:24:48.001206  145142 main.go:141] libmachine: (ha-925161-m03) DBG | I0719 04:24:48.001147  145993 retry.go:31] will retry after 455.182254ms: waiting for machine to come up
	I0719 04:24:48.457709  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:24:48.458155  145142 main.go:141] libmachine: (ha-925161-m03) DBG | unable to find current IP address of domain ha-925161-m03 in network mk-ha-925161
	I0719 04:24:48.458178  145142 main.go:141] libmachine: (ha-925161-m03) DBG | I0719 04:24:48.458122  145993 retry.go:31] will retry after 521.468381ms: waiting for machine to come up
	I0719 04:24:48.981537  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:24:48.981867  145142 main.go:141] libmachine: (ha-925161-m03) DBG | unable to find current IP address of domain ha-925161-m03 in network mk-ha-925161
	I0719 04:24:48.981921  145142 main.go:141] libmachine: (ha-925161-m03) DBG | I0719 04:24:48.981819  145993 retry.go:31] will retry after 619.202661ms: waiting for machine to come up
	I0719 04:24:49.602142  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:24:49.602622  145142 main.go:141] libmachine: (ha-925161-m03) DBG | unable to find current IP address of domain ha-925161-m03 in network mk-ha-925161
	I0719 04:24:49.602647  145142 main.go:141] libmachine: (ha-925161-m03) DBG | I0719 04:24:49.602581  145993 retry.go:31] will retry after 1.090091658s: waiting for machine to come up
	I0719 04:24:50.694118  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:24:50.694561  145142 main.go:141] libmachine: (ha-925161-m03) DBG | unable to find current IP address of domain ha-925161-m03 in network mk-ha-925161
	I0719 04:24:50.694596  145142 main.go:141] libmachine: (ha-925161-m03) DBG | I0719 04:24:50.694532  145993 retry.go:31] will retry after 1.444482953s: waiting for machine to come up
	I0719 04:24:52.140189  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:24:52.140684  145142 main.go:141] libmachine: (ha-925161-m03) DBG | unable to find current IP address of domain ha-925161-m03 in network mk-ha-925161
	I0719 04:24:52.140716  145142 main.go:141] libmachine: (ha-925161-m03) DBG | I0719 04:24:52.140619  145993 retry.go:31] will retry after 1.264022258s: waiting for machine to come up
	I0719 04:24:53.406252  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:24:53.406758  145142 main.go:141] libmachine: (ha-925161-m03) DBG | unable to find current IP address of domain ha-925161-m03 in network mk-ha-925161
	I0719 04:24:53.406781  145142 main.go:141] libmachine: (ha-925161-m03) DBG | I0719 04:24:53.406722  145993 retry.go:31] will retry after 1.423444201s: waiting for machine to come up
	I0719 04:24:54.831522  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:24:54.832037  145142 main.go:141] libmachine: (ha-925161-m03) DBG | unable to find current IP address of domain ha-925161-m03 in network mk-ha-925161
	I0719 04:24:54.832062  145142 main.go:141] libmachine: (ha-925161-m03) DBG | I0719 04:24:54.831981  145993 retry.go:31] will retry after 2.511156737s: waiting for machine to come up
	I0719 04:24:57.344288  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:24:57.344562  145142 main.go:141] libmachine: (ha-925161-m03) DBG | unable to find current IP address of domain ha-925161-m03 in network mk-ha-925161
	I0719 04:24:57.344591  145142 main.go:141] libmachine: (ha-925161-m03) DBG | I0719 04:24:57.344511  145993 retry.go:31] will retry after 3.426540062s: waiting for machine to come up
	I0719 04:25:00.773262  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:00.773769  145142 main.go:141] libmachine: (ha-925161-m03) DBG | unable to find current IP address of domain ha-925161-m03 in network mk-ha-925161
	I0719 04:25:00.773799  145142 main.go:141] libmachine: (ha-925161-m03) DBG | I0719 04:25:00.773727  145993 retry.go:31] will retry after 4.350683357s: waiting for machine to come up
	I0719 04:25:05.126142  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:05.126708  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has current primary IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:05.126726  145142 main.go:141] libmachine: (ha-925161-m03) Found IP for machine: 192.168.39.190
	I0719 04:25:05.126739  145142 main.go:141] libmachine: (ha-925161-m03) Reserving static IP address...
	I0719 04:25:05.127121  145142 main.go:141] libmachine: (ha-925161-m03) DBG | unable to find host DHCP lease matching {name: "ha-925161-m03", mac: "52:54:00:7e:5f:eb", ip: "192.168.39.190"} in network mk-ha-925161
	I0719 04:25:05.201307  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Getting to WaitForSSH function...
	I0719 04:25:05.201347  145142 main.go:141] libmachine: (ha-925161-m03) Reserved static IP address: 192.168.39.190
	I0719 04:25:05.201361  145142 main.go:141] libmachine: (ha-925161-m03) Waiting for SSH to be available...
	I0719 04:25:05.203824  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:05.204186  145142 main.go:141] libmachine: (ha-925161-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161
	I0719 04:25:05.204212  145142 main.go:141] libmachine: (ha-925161-m03) DBG | unable to find defined IP address of network mk-ha-925161 interface with MAC address 52:54:00:7e:5f:eb
	I0719 04:25:05.204403  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Using SSH client type: external
	I0719 04:25:05.204429  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03/id_rsa (-rw-------)
	I0719 04:25:05.204464  145142 main.go:141] libmachine: (ha-925161-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 04:25:05.204479  145142 main.go:141] libmachine: (ha-925161-m03) DBG | About to run SSH command:
	I0719 04:25:05.204509  145142 main.go:141] libmachine: (ha-925161-m03) DBG | exit 0
	I0719 04:25:05.208140  145142 main.go:141] libmachine: (ha-925161-m03) DBG | SSH cmd err, output: exit status 255: 
	I0719 04:25:05.208162  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0719 04:25:05.208169  145142 main.go:141] libmachine: (ha-925161-m03) DBG | command : exit 0
	I0719 04:25:05.208175  145142 main.go:141] libmachine: (ha-925161-m03) DBG | err     : exit status 255
	I0719 04:25:05.208213  145142 main.go:141] libmachine: (ha-925161-m03) DBG | output  : 
	I0719 04:25:08.210191  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Getting to WaitForSSH function...
	I0719 04:25:08.212633  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:08.213024  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:25:08.213060  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:08.213147  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Using SSH client type: external
	I0719 04:25:08.213184  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03/id_rsa (-rw-------)
	I0719 04:25:08.213215  145142 main.go:141] libmachine: (ha-925161-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.190 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 04:25:08.213228  145142 main.go:141] libmachine: (ha-925161-m03) DBG | About to run SSH command:
	I0719 04:25:08.213255  145142 main.go:141] libmachine: (ha-925161-m03) DBG | exit 0
	I0719 04:25:08.336885  145142 main.go:141] libmachine: (ha-925161-m03) DBG | SSH cmd err, output: <nil>: 
	I0719 04:25:08.337176  145142 main.go:141] libmachine: (ha-925161-m03) KVM machine creation complete!
	I0719 04:25:08.337537  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetConfigRaw
	I0719 04:25:08.338098  145142 main.go:141] libmachine: (ha-925161-m03) Calling .DriverName
	I0719 04:25:08.338325  145142 main.go:141] libmachine: (ha-925161-m03) Calling .DriverName
	I0719 04:25:08.338498  145142 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0719 04:25:08.338516  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetState
	I0719 04:25:08.339906  145142 main.go:141] libmachine: Detecting operating system of created instance...
	I0719 04:25:08.339923  145142 main.go:141] libmachine: Waiting for SSH to be available...
	I0719 04:25:08.339931  145142 main.go:141] libmachine: Getting to WaitForSSH function...
	I0719 04:25:08.339941  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHHostname
	I0719 04:25:08.342374  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:08.342802  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:25:08.342832  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:08.343011  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHPort
	I0719 04:25:08.343210  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:25:08.343453  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:25:08.343660  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHUsername
	I0719 04:25:08.343828  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:25:08.344130  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0719 04:25:08.344148  145142 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0719 04:25:08.444238  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 04:25:08.444261  145142 main.go:141] libmachine: Detecting the provisioner...
	I0719 04:25:08.444270  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHHostname
	I0719 04:25:08.447342  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:08.447711  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:25:08.447737  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:08.447949  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHPort
	I0719 04:25:08.448156  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:25:08.448292  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:25:08.448399  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHUsername
	I0719 04:25:08.448600  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:25:08.448806  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0719 04:25:08.448822  145142 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0719 04:25:08.549808  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0719 04:25:08.549874  145142 main.go:141] libmachine: found compatible host: buildroot
	I0719 04:25:08.549885  145142 main.go:141] libmachine: Provisioning with buildroot...
	I0719 04:25:08.549906  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetMachineName
	I0719 04:25:08.550207  145142 buildroot.go:166] provisioning hostname "ha-925161-m03"
	I0719 04:25:08.550237  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetMachineName
	I0719 04:25:08.550439  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHHostname
	I0719 04:25:08.552967  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:08.553374  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:25:08.553395  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:08.553561  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHPort
	I0719 04:25:08.553730  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:25:08.553856  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:25:08.554001  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHUsername
	I0719 04:25:08.554204  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:25:08.554363  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0719 04:25:08.554378  145142 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-925161-m03 && echo "ha-925161-m03" | sudo tee /etc/hostname
	I0719 04:25:08.670792  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-925161-m03
	
	I0719 04:25:08.670838  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHHostname
	I0719 04:25:08.673865  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:08.674347  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:25:08.674378  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:08.674677  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHPort
	I0719 04:25:08.674938  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:25:08.675116  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:25:08.675268  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHUsername
	I0719 04:25:08.675418  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:25:08.675614  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0719 04:25:08.675633  145142 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-925161-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-925161-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-925161-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 04:25:08.785771  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 04:25:08.785805  145142 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-122995/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-122995/.minikube}
	I0719 04:25:08.785829  145142 buildroot.go:174] setting up certificates
	I0719 04:25:08.785843  145142 provision.go:84] configureAuth start
	I0719 04:25:08.785859  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetMachineName
	I0719 04:25:08.786159  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetIP
	I0719 04:25:08.788778  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:08.789202  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:25:08.789238  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:08.789471  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHHostname
	I0719 04:25:08.791902  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:08.792363  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:25:08.792394  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:08.792516  145142 provision.go:143] copyHostCerts
	I0719 04:25:08.792550  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem
	I0719 04:25:08.792587  145142 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem, removing ...
	I0719 04:25:08.792598  145142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem
	I0719 04:25:08.792677  145142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem (1082 bytes)
	I0719 04:25:08.792774  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem
	I0719 04:25:08.792799  145142 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem, removing ...
	I0719 04:25:08.792809  145142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem
	I0719 04:25:08.792845  145142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem (1123 bytes)
	I0719 04:25:08.792906  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem
	I0719 04:25:08.792929  145142 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem, removing ...
	I0719 04:25:08.792937  145142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem
	I0719 04:25:08.792973  145142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem (1679 bytes)
	I0719 04:25:08.793041  145142 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem org=jenkins.ha-925161-m03 san=[127.0.0.1 192.168.39.190 ha-925161-m03 localhost minikube]
	I0719 04:25:08.931698  145142 provision.go:177] copyRemoteCerts
	I0719 04:25:08.931756  145142 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 04:25:08.931784  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHHostname
	I0719 04:25:08.934674  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:08.935001  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:25:08.935023  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:08.935337  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHPort
	I0719 04:25:08.935539  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:25:08.935681  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHUsername
	I0719 04:25:08.935811  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03/id_rsa Username:docker}
	I0719 04:25:09.014813  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 04:25:09.014894  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0719 04:25:09.037362  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 04:25:09.037428  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 04:25:09.059453  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 04:25:09.059533  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 04:25:09.081377  145142 provision.go:87] duration metric: took 295.517176ms to configureAuth
	I0719 04:25:09.081407  145142 buildroot.go:189] setting minikube options for container-runtime
	I0719 04:25:09.081666  145142 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:25:09.081764  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHHostname
	I0719 04:25:09.084474  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:09.084903  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:25:09.084926  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:09.085173  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHPort
	I0719 04:25:09.085391  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:25:09.085588  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:25:09.085734  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHUsername
	I0719 04:25:09.085868  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:25:09.086048  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0719 04:25:09.086067  145142 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 04:25:09.337632  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 04:25:09.337662  145142 main.go:141] libmachine: Checking connection to Docker...
	I0719 04:25:09.337673  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetURL
	I0719 04:25:09.339132  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Using libvirt version 6000000
	I0719 04:25:09.341688  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:09.342084  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:25:09.342115  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:09.342281  145142 main.go:141] libmachine: Docker is up and running!
	I0719 04:25:09.342298  145142 main.go:141] libmachine: Reticulating splines...
	I0719 04:25:09.342305  145142 client.go:171] duration metric: took 23.828219304s to LocalClient.Create
	I0719 04:25:09.342330  145142 start.go:167] duration metric: took 23.828288361s to libmachine.API.Create "ha-925161"
	I0719 04:25:09.342343  145142 start.go:293] postStartSetup for "ha-925161-m03" (driver="kvm2")
	I0719 04:25:09.342474  145142 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 04:25:09.342510  145142 main.go:141] libmachine: (ha-925161-m03) Calling .DriverName
	I0719 04:25:09.342779  145142 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 04:25:09.342803  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHHostname
	I0719 04:25:09.345496  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:09.345835  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:25:09.345859  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:09.346014  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHPort
	I0719 04:25:09.346226  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:25:09.346405  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHUsername
	I0719 04:25:09.346563  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03/id_rsa Username:docker}
	I0719 04:25:09.427161  145142 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 04:25:09.431042  145142 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 04:25:09.431066  145142 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-122995/.minikube/addons for local assets ...
	I0719 04:25:09.431133  145142 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-122995/.minikube/files for local assets ...
	I0719 04:25:09.431203  145142 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem -> 1301702.pem in /etc/ssl/certs
	I0719 04:25:09.431216  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem -> /etc/ssl/certs/1301702.pem
	I0719 04:25:09.431329  145142 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 04:25:09.439889  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem --> /etc/ssl/certs/1301702.pem (1708 bytes)
	I0719 04:25:09.461424  145142 start.go:296] duration metric: took 118.951136ms for postStartSetup
	I0719 04:25:09.461486  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetConfigRaw
	I0719 04:25:09.462127  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetIP
	I0719 04:25:09.464905  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:09.465308  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:25:09.465331  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:09.465615  145142 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/config.json ...
	I0719 04:25:09.465801  145142 start.go:128] duration metric: took 23.970556216s to createHost
	I0719 04:25:09.465825  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHHostname
	I0719 04:25:09.468059  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:09.468371  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:25:09.468397  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:09.468510  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHPort
	I0719 04:25:09.468685  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:25:09.468857  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:25:09.469033  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHUsername
	I0719 04:25:09.469239  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:25:09.469429  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0719 04:25:09.469440  145142 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 04:25:09.570447  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721363109.550120349
	
	I0719 04:25:09.570473  145142 fix.go:216] guest clock: 1721363109.550120349
	I0719 04:25:09.570483  145142 fix.go:229] Guest: 2024-07-19 04:25:09.550120349 +0000 UTC Remote: 2024-07-19 04:25:09.465813538 +0000 UTC m=+159.718937610 (delta=84.306811ms)
	I0719 04:25:09.570503  145142 fix.go:200] guest clock delta is within tolerance: 84.306811ms
	I0719 04:25:09.570510  145142 start.go:83] releasing machines lock for "ha-925161-m03", held for 24.075380293s
	I0719 04:25:09.570534  145142 main.go:141] libmachine: (ha-925161-m03) Calling .DriverName
	I0719 04:25:09.570805  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetIP
	I0719 04:25:09.573667  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:09.574164  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:25:09.574203  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:09.576636  145142 out.go:177] * Found network options:
	I0719 04:25:09.578072  145142 out.go:177]   - NO_PROXY=192.168.39.246,192.168.39.102
	W0719 04:25:09.579382  145142 proxy.go:119] fail to check proxy env: Error ip not in block
	W0719 04:25:09.579416  145142 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 04:25:09.579434  145142 main.go:141] libmachine: (ha-925161-m03) Calling .DriverName
	I0719 04:25:09.580084  145142 main.go:141] libmachine: (ha-925161-m03) Calling .DriverName
	I0719 04:25:09.580346  145142 main.go:141] libmachine: (ha-925161-m03) Calling .DriverName
	I0719 04:25:09.580456  145142 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 04:25:09.580496  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHHostname
	W0719 04:25:09.580557  145142 proxy.go:119] fail to check proxy env: Error ip not in block
	W0719 04:25:09.580586  145142 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 04:25:09.580655  145142 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 04:25:09.580678  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHHostname
	I0719 04:25:09.583380  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:09.583405  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:09.583788  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:25:09.583813  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:09.583972  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:25:09.583996  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHPort
	I0719 04:25:09.583999  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:09.584193  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:25:09.584242  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHPort
	I0719 04:25:09.584387  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHUsername
	I0719 04:25:09.584407  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:25:09.584637  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03/id_rsa Username:docker}
	I0719 04:25:09.584667  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHUsername
	I0719 04:25:09.584813  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03/id_rsa Username:docker}
	I0719 04:25:09.816201  145142 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 04:25:09.822223  145142 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 04:25:09.822314  145142 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 04:25:09.837919  145142 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 04:25:09.837944  145142 start.go:495] detecting cgroup driver to use...
	I0719 04:25:09.838012  145142 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 04:25:09.854894  145142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 04:25:09.868083  145142 docker.go:217] disabling cri-docker service (if available) ...
	I0719 04:25:09.868143  145142 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 04:25:09.881305  145142 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 04:25:09.894290  145142 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 04:25:10.008511  145142 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 04:25:10.148950  145142 docker.go:233] disabling docker service ...
	I0719 04:25:10.149020  145142 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 04:25:10.163566  145142 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 04:25:10.178022  145142 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 04:25:10.334596  145142 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 04:25:10.465736  145142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 04:25:10.478989  145142 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 04:25:10.497102  145142 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 04:25:10.497178  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:25:10.507362  145142 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 04:25:10.507440  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:25:10.517566  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:25:10.527265  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:25:10.536829  145142 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 04:25:10.546961  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:25:10.556566  145142 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:25:10.572316  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:25:10.582162  145142 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 04:25:10.591369  145142 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 04:25:10.591430  145142 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 04:25:10.604198  145142 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 04:25:10.613207  145142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:25:10.734874  145142 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 04:25:10.870466  145142 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 04:25:10.870545  145142 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 04:25:10.875402  145142 start.go:563] Will wait 60s for crictl version
	I0719 04:25:10.875469  145142 ssh_runner.go:195] Run: which crictl
	I0719 04:25:10.879049  145142 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 04:25:10.921854  145142 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 04:25:10.921933  145142 ssh_runner.go:195] Run: crio --version
	I0719 04:25:10.949193  145142 ssh_runner.go:195] Run: crio --version
	I0719 04:25:10.977659  145142 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 04:25:10.979121  145142 out.go:177]   - env NO_PROXY=192.168.39.246
	I0719 04:25:10.980765  145142 out.go:177]   - env NO_PROXY=192.168.39.246,192.168.39.102
	I0719 04:25:10.982367  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetIP
	I0719 04:25:10.985396  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:10.985955  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:25:10.985981  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:10.986209  145142 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 04:25:10.990177  145142 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 04:25:11.001885  145142 mustload.go:65] Loading cluster: ha-925161
	I0719 04:25:11.002121  145142 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:25:11.002450  145142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:25:11.002501  145142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:25:11.018736  145142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42461
	I0719 04:25:11.019224  145142 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:25:11.019696  145142 main.go:141] libmachine: Using API Version  1
	I0719 04:25:11.019720  145142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:25:11.020042  145142 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:25:11.020260  145142 main.go:141] libmachine: (ha-925161) Calling .GetState
	I0719 04:25:11.021841  145142 host.go:66] Checking if "ha-925161" exists ...
	I0719 04:25:11.022135  145142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:25:11.022170  145142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:25:11.037341  145142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38671
	I0719 04:25:11.037778  145142 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:25:11.038254  145142 main.go:141] libmachine: Using API Version  1
	I0719 04:25:11.038290  145142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:25:11.038574  145142 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:25:11.038765  145142 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:25:11.038954  145142 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161 for IP: 192.168.39.190
	I0719 04:25:11.038968  145142 certs.go:194] generating shared ca certs ...
	I0719 04:25:11.038987  145142 certs.go:226] acquiring lock for ca certs: {Name:mk4073377b5f511f5cfaf63e5b0f12377e731a57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:25:11.039124  145142 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key
	I0719 04:25:11.039188  145142 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key
	I0719 04:25:11.039202  145142 certs.go:256] generating profile certs ...
	I0719 04:25:11.039295  145142 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/client.key
	I0719 04:25:11.039328  145142 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key.84697c77
	I0719 04:25:11.039355  145142 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt.84697c77 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246 192.168.39.102 192.168.39.190 192.168.39.254]
	I0719 04:25:11.567437  145142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt.84697c77 ...
	I0719 04:25:11.567471  145142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt.84697c77: {Name:mk373f1857bc49369966cfa39fe8c1a2e380ab66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:25:11.567658  145142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key.84697c77 ...
	I0719 04:25:11.567672  145142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key.84697c77: {Name:mkd1589f36926e43cc9ee20b274551dfc36ba7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:25:11.567745  145142 certs.go:381] copying /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt.84697c77 -> /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt
	I0719 04:25:11.567865  145142 certs.go:385] copying /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key.84697c77 -> /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key
	I0719 04:25:11.567989  145142 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.key
	I0719 04:25:11.568005  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 04:25:11.568017  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0719 04:25:11.568030  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 04:25:11.568043  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 04:25:11.568055  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 04:25:11.568071  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 04:25:11.568083  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 04:25:11.568095  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 04:25:11.568144  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem (1338 bytes)
	W0719 04:25:11.568172  145142 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170_empty.pem, impossibly tiny 0 bytes
	I0719 04:25:11.568181  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 04:25:11.568204  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem (1082 bytes)
	I0719 04:25:11.568227  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem (1123 bytes)
	I0719 04:25:11.568247  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem (1679 bytes)
	I0719 04:25:11.568281  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem (1708 bytes)
	I0719 04:25:11.568351  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:25:11.568372  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem -> /usr/share/ca-certificates/130170.pem
	I0719 04:25:11.568384  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem -> /usr/share/ca-certificates/1301702.pem
	I0719 04:25:11.568417  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:25:11.571552  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:25:11.571928  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:25:11.571964  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:25:11.572198  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:25:11.572464  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:25:11.572632  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:25:11.572782  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:25:11.645507  145142 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0719 04:25:11.650229  145142 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0719 04:25:11.661243  145142 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0719 04:25:11.665650  145142 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0719 04:25:11.681346  145142 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0719 04:25:11.687467  145142 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0719 04:25:11.698118  145142 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0719 04:25:11.701925  145142 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0719 04:25:11.712824  145142 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0719 04:25:11.716812  145142 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0719 04:25:11.726777  145142 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0719 04:25:11.731335  145142 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0719 04:25:11.741502  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 04:25:11.765620  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 04:25:11.789211  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 04:25:11.813083  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 04:25:11.838453  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0719 04:25:11.863963  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 04:25:11.888495  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 04:25:11.912939  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 04:25:11.935621  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 04:25:11.957513  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem --> /usr/share/ca-certificates/130170.pem (1338 bytes)
	I0719 04:25:11.980784  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem --> /usr/share/ca-certificates/1301702.pem (1708 bytes)
	I0719 04:25:12.004296  145142 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0719 04:25:12.020460  145142 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0719 04:25:12.036721  145142 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0719 04:25:12.052426  145142 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0719 04:25:12.067790  145142 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0719 04:25:12.084563  145142 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0719 04:25:12.101359  145142 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0719 04:25:12.117840  145142 ssh_runner.go:195] Run: openssl version
	I0719 04:25:12.123111  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 04:25:12.132876  145142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:25:12.136942  145142 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:38 /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:25:12.137008  145142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:25:12.142543  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 04:25:12.152054  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130170.pem && ln -fs /usr/share/ca-certificates/130170.pem /etc/ssl/certs/130170.pem"
	I0719 04:25:12.161572  145142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130170.pem
	I0719 04:25:12.165628  145142 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 04:19 /usr/share/ca-certificates/130170.pem
	I0719 04:25:12.165674  145142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130170.pem
	I0719 04:25:12.171083  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/130170.pem /etc/ssl/certs/51391683.0"
	I0719 04:25:12.182216  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1301702.pem && ln -fs /usr/share/ca-certificates/1301702.pem /etc/ssl/certs/1301702.pem"
	I0719 04:25:12.192475  145142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1301702.pem
	I0719 04:25:12.196619  145142 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 04:19 /usr/share/ca-certificates/1301702.pem
	I0719 04:25:12.196682  145142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1301702.pem
	I0719 04:25:12.201974  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1301702.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 04:25:12.212165  145142 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 04:25:12.215954  145142 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 04:25:12.216017  145142 kubeadm.go:934] updating node {m03 192.168.39.190 8443 v1.30.3 crio true true} ...
	I0719 04:25:12.216173  145142 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-925161-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.190
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-925161 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 04:25:12.216208  145142 kube-vip.go:115] generating kube-vip config ...
	I0719 04:25:12.216249  145142 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0719 04:25:12.232292  145142 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0719 04:25:12.232359  145142 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0719 04:25:12.232410  145142 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 04:25:12.241087  145142 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0719 04:25:12.241153  145142 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0719 04:25:12.249989  145142 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0719 04:25:12.250028  145142 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0719 04:25:12.249989  145142 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0719 04:25:12.250039  145142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:25:12.250048  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0719 04:25:12.250052  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0719 04:25:12.250134  145142 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0719 04:25:12.250134  145142 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0719 04:25:12.254062  145142 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0719 04:25:12.254092  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0719 04:25:12.273890  145142 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0719 04:25:12.273940  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0719 04:25:12.273942  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0719 04:25:12.274129  145142 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0719 04:25:12.329661  145142 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0719 04:25:12.329720  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0719 04:25:13.105116  145142 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0719 04:25:13.114745  145142 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0719 04:25:13.130878  145142 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 04:25:13.146901  145142 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0719 04:25:13.163498  145142 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0719 04:25:13.167247  145142 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 04:25:13.180301  145142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:25:13.333576  145142 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 04:25:13.350966  145142 host.go:66] Checking if "ha-925161" exists ...
	I0719 04:25:13.351327  145142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:25:13.351368  145142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:25:13.366893  145142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39289
	I0719 04:25:13.367315  145142 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:25:13.367879  145142 main.go:141] libmachine: Using API Version  1
	I0719 04:25:13.367905  145142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:25:13.368277  145142 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:25:13.368500  145142 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:25:13.368660  145142 start.go:317] joinCluster: &{Name:ha-925161 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-925161 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:25:13.368829  145142 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0719 04:25:13.368850  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:25:13.371895  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:25:13.372431  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:25:13.372461  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:25:13.372623  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:25:13.372827  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:25:13.372983  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:25:13.373168  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:25:13.533338  145142 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 04:25:13.533397  145142 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0kajtd.tjg4friexfw44gr8 --discovery-token-ca-cert-hash sha256:1b8c9b438cd382daae07d0c80077e3e844c6e3a56a419c26c4cfa86e5846b833 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-925161-m03 --control-plane --apiserver-advertise-address=192.168.39.190 --apiserver-bind-port=8443"
	I0719 04:25:37.396567  145142 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0kajtd.tjg4friexfw44gr8 --discovery-token-ca-cert-hash sha256:1b8c9b438cd382daae07d0c80077e3e844c6e3a56a419c26c4cfa86e5846b833 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-925161-m03 --control-plane --apiserver-advertise-address=192.168.39.190 --apiserver-bind-port=8443": (23.863139662s)
	I0719 04:25:37.396608  145142 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0719 04:25:38.006840  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-925161-m03 minikube.k8s.io/updated_at=2024_07_19T04_25_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db minikube.k8s.io/name=ha-925161 minikube.k8s.io/primary=false
	I0719 04:25:38.124813  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-925161-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0719 04:25:38.236587  145142 start.go:319] duration metric: took 24.867922687s to joinCluster
	I0719 04:25:38.236685  145142 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 04:25:38.237022  145142 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:25:38.238244  145142 out.go:177] * Verifying Kubernetes components...
	I0719 04:25:38.239737  145142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:25:38.483563  145142 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 04:25:38.548096  145142 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19302-122995/kubeconfig
	I0719 04:25:38.548374  145142 kapi.go:59] client config for ha-925161: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/client.crt", KeyFile:"/home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/client.key", CAFile:"/home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0719 04:25:38.548437  145142 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.246:8443
	I0719 04:25:38.548683  145142 node_ready.go:35] waiting up to 6m0s for node "ha-925161-m03" to be "Ready" ...
	I0719 04:25:38.548763  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:38.548774  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:38.548785  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:38.548793  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:38.552631  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:39.049410  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:39.049435  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:39.049444  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:39.049450  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:39.053503  145142 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:25:39.549845  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:39.549874  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:39.549885  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:39.549891  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:39.553566  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:40.049392  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:40.049418  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:40.049434  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:40.049438  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:40.052716  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:40.549235  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:40.549259  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:40.549270  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:40.549277  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:40.553259  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:40.553997  145142 node_ready.go:53] node "ha-925161-m03" has status "Ready":"False"
	I0719 04:25:41.049228  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:41.049249  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:41.049261  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:41.049266  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:41.053031  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:41.549512  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:41.549533  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:41.549541  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:41.549546  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:41.553346  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:42.049652  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:42.049694  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:42.049710  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:42.049716  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:42.052936  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:42.549384  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:42.549404  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:42.549413  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:42.549418  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:42.554109  145142 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:25:42.555084  145142 node_ready.go:53] node "ha-925161-m03" has status "Ready":"False"
	I0719 04:25:43.049381  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:43.049407  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:43.049418  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:43.049426  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:43.052749  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:43.549940  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:43.549962  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:43.549970  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:43.549973  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:43.553484  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:44.049655  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:44.049689  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:44.049710  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:44.049717  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:44.052716  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:25:44.549744  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:44.549769  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:44.549779  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:44.549785  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:44.553660  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:45.048924  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:45.048948  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:45.048956  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:45.048960  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:45.052171  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:45.052951  145142 node_ready.go:53] node "ha-925161-m03" has status "Ready":"False"
	I0719 04:25:45.549607  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:45.549632  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:45.549645  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:45.549651  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:45.553046  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:46.048833  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:46.048855  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:46.048863  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:46.048868  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:46.052096  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:46.549440  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:46.549464  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:46.549476  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:46.549482  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:46.552366  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:25:47.049236  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:47.049262  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:47.049275  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:47.049280  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:47.053113  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:47.053626  145142 node_ready.go:53] node "ha-925161-m03" has status "Ready":"False"
	I0719 04:25:47.549474  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:47.549550  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:47.549566  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:47.549572  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:47.553971  145142 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:25:48.048975  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:48.048998  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:48.049006  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:48.049010  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:48.052841  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:48.548896  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:48.548918  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:48.548926  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:48.548930  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:48.552539  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:49.049486  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:49.049507  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:49.049515  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:49.049519  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:49.052729  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:49.549738  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:49.549764  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:49.549776  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:49.549782  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:49.553116  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:49.553814  145142 node_ready.go:53] node "ha-925161-m03" has status "Ready":"False"
	I0719 04:25:50.049901  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:50.049932  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:50.049944  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:50.049952  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:50.053305  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:50.549885  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:50.549908  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:50.549918  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:50.549923  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:50.553396  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:51.049280  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:51.049298  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:51.049310  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:51.049321  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:51.052449  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:51.549329  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:51.549354  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:51.549365  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:51.549370  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:51.552531  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:52.049876  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:52.049902  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:52.049914  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:52.049919  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:52.052842  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:25:52.053631  145142 node_ready.go:53] node "ha-925161-m03" has status "Ready":"False"
	I0719 04:25:52.549220  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:52.549241  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:52.549250  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:52.549254  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:52.552348  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:53.049767  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:53.049790  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:53.049800  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:53.049804  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:53.053107  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:53.549332  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:53.549358  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:53.549369  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:53.549374  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:53.552631  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:54.049552  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:54.049574  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:54.049582  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:54.049586  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:54.052677  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:54.549757  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:54.549781  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:54.549792  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:54.549800  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:54.553100  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:54.553667  145142 node_ready.go:53] node "ha-925161-m03" has status "Ready":"False"
	I0719 04:25:55.049799  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:55.049828  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:55.049839  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:55.049846  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:55.053891  145142 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:25:55.549226  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:55.549244  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:55.549252  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:55.549256  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:55.552834  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:56.049339  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:56.049362  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:56.049374  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:56.049380  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:56.052933  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:56.053768  145142 node_ready.go:49] node "ha-925161-m03" has status "Ready":"True"
	I0719 04:25:56.053791  145142 node_ready.go:38] duration metric: took 17.505093181s for node "ha-925161-m03" to be "Ready" ...
	I0719 04:25:56.053801  145142 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 04:25:56.053873  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0719 04:25:56.053884  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:56.053891  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:56.053898  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:56.060659  145142 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:25:56.067354  145142 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7wzcg" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:56.067437  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7wzcg
	I0719 04:25:56.067445  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:56.067452  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:56.067456  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:56.071268  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:56.072407  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:25:56.072420  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:56.072428  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:56.072432  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:56.075974  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:56.076571  145142 pod_ready.go:92] pod "coredns-7db6d8ff4d-7wzcg" in "kube-system" namespace has status "Ready":"True"
	I0719 04:25:56.076612  145142 pod_ready.go:81] duration metric: took 9.232088ms for pod "coredns-7db6d8ff4d-7wzcg" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:56.076625  145142 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hwdsq" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:56.076695  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hwdsq
	I0719 04:25:56.076707  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:56.076716  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:56.076722  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:56.079529  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:25:56.080117  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:25:56.080129  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:56.080136  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:56.080140  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:56.083662  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:56.084597  145142 pod_ready.go:92] pod "coredns-7db6d8ff4d-hwdsq" in "kube-system" namespace has status "Ready":"True"
	I0719 04:25:56.084614  145142 pod_ready.go:81] duration metric: took 7.983149ms for pod "coredns-7db6d8ff4d-hwdsq" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:56.084623  145142 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-925161" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:56.084676  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-925161
	I0719 04:25:56.084686  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:56.084703  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:56.084711  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:56.087849  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:56.088515  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:25:56.088531  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:56.088538  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:56.088542  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:56.092101  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:56.092552  145142 pod_ready.go:92] pod "etcd-ha-925161" in "kube-system" namespace has status "Ready":"True"
	I0719 04:25:56.092568  145142 pod_ready.go:81] duration metric: took 7.940039ms for pod "etcd-ha-925161" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:56.092576  145142 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-925161-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:56.092638  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-925161-m02
	I0719 04:25:56.092649  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:56.092658  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:56.092663  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:56.100570  145142 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 04:25:56.101216  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:25:56.101230  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:56.101237  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:56.101241  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:56.103631  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:25:56.104014  145142 pod_ready.go:92] pod "etcd-ha-925161-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 04:25:56.104030  145142 pod_ready.go:81] duration metric: took 11.448439ms for pod "etcd-ha-925161-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:56.104040  145142 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-925161-m03" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:56.249352  145142 request.go:629] Waited for 145.229729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-925161-m03
	I0719 04:25:56.249425  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-925161-m03
	I0719 04:25:56.249430  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:56.249437  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:56.249443  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:56.252774  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:56.449798  145142 request.go:629] Waited for 196.362556ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:56.449867  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:56.449874  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:56.449885  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:56.449892  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:56.453499  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:56.453990  145142 pod_ready.go:92] pod "etcd-ha-925161-m03" in "kube-system" namespace has status "Ready":"True"
	I0719 04:25:56.454014  145142 pod_ready.go:81] duration metric: took 349.966859ms for pod "etcd-ha-925161-m03" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:56.454038  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-925161" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:56.650141  145142 request.go:629] Waited for 196.006293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-925161
	I0719 04:25:56.650212  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-925161
	I0719 04:25:56.650221  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:56.650232  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:56.650245  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:56.653688  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:56.849639  145142 request.go:629] Waited for 195.358648ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:25:56.849732  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:25:56.849741  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:56.849750  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:56.849756  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:56.852822  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:56.853646  145142 pod_ready.go:92] pod "kube-apiserver-ha-925161" in "kube-system" namespace has status "Ready":"True"
	I0719 04:25:56.853674  145142 pod_ready.go:81] duration metric: took 399.623518ms for pod "kube-apiserver-ha-925161" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:56.853688  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-925161-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:57.049570  145142 request.go:629] Waited for 195.803774ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-925161-m02
	I0719 04:25:57.049672  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-925161-m02
	I0719 04:25:57.049684  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:57.049696  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:57.049707  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:57.053372  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:57.250260  145142 request.go:629] Waited for 196.267735ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:25:57.250336  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:25:57.250348  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:57.250359  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:57.250369  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:57.253523  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:57.253994  145142 pod_ready.go:92] pod "kube-apiserver-ha-925161-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 04:25:57.254013  145142 pod_ready.go:81] duration metric: took 400.316599ms for pod "kube-apiserver-ha-925161-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:57.254025  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-925161-m03" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:57.449485  145142 request.go:629] Waited for 195.37046ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-925161-m03
	I0719 04:25:57.449558  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-925161-m03
	I0719 04:25:57.449570  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:57.449580  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:57.449589  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:57.453549  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:57.649581  145142 request.go:629] Waited for 195.278712ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:57.649652  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:57.649660  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:57.649670  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:57.649674  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:57.652290  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:25:57.652835  145142 pod_ready.go:92] pod "kube-apiserver-ha-925161-m03" in "kube-system" namespace has status "Ready":"True"
	I0719 04:25:57.652857  145142 pod_ready.go:81] duration metric: took 398.823668ms for pod "kube-apiserver-ha-925161-m03" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:57.652869  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-925161" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:57.849748  145142 request.go:629] Waited for 196.791111ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-925161
	I0719 04:25:57.849824  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-925161
	I0719 04:25:57.849829  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:57.849835  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:57.849840  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:57.853222  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:58.050339  145142 request.go:629] Waited for 196.349823ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:25:58.050422  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:25:58.050430  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:58.050437  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:58.050443  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:58.053777  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:58.054507  145142 pod_ready.go:92] pod "kube-controller-manager-ha-925161" in "kube-system" namespace has status "Ready":"True"
	I0719 04:25:58.054526  145142 pod_ready.go:81] duration metric: took 401.64792ms for pod "kube-controller-manager-ha-925161" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:58.054538  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-925161-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:58.249660  145142 request.go:629] Waited for 195.049698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-925161-m02
	I0719 04:25:58.249723  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-925161-m02
	I0719 04:25:58.249729  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:58.249737  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:58.249740  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:58.252894  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:58.450122  145142 request.go:629] Waited for 196.378279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:25:58.450213  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:25:58.450224  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:58.450242  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:58.450253  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:58.454020  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:58.454596  145142 pod_ready.go:92] pod "kube-controller-manager-ha-925161-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 04:25:58.454615  145142 pod_ready.go:81] duration metric: took 400.070348ms for pod "kube-controller-manager-ha-925161-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:58.454625  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-925161-m03" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:58.649796  145142 request.go:629] Waited for 195.085408ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-925161-m03
	I0719 04:25:58.649856  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-925161-m03
	I0719 04:25:58.649862  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:58.649870  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:58.649874  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:58.653446  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:58.850187  145142 request.go:629] Waited for 195.248482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:58.850262  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:58.850273  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:58.850283  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:58.850291  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:58.853704  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:58.854276  145142 pod_ready.go:92] pod "kube-controller-manager-ha-925161-m03" in "kube-system" namespace has status "Ready":"True"
	I0719 04:25:58.854293  145142 pod_ready.go:81] duration metric: took 399.662625ms for pod "kube-controller-manager-ha-925161-m03" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:58.854303  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8dbqt" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:59.049918  145142 request.go:629] Waited for 195.537406ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8dbqt
	I0719 04:25:59.050021  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8dbqt
	I0719 04:25:59.050033  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:59.050041  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:59.050047  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:59.053229  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:59.249473  145142 request.go:629] Waited for 195.302433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:25:59.249544  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:25:59.249551  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:59.249561  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:59.249569  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:59.252622  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:59.253355  145142 pod_ready.go:92] pod "kube-proxy-8dbqt" in "kube-system" namespace has status "Ready":"True"
	I0719 04:25:59.253377  145142 pod_ready.go:81] duration metric: took 399.064103ms for pod "kube-proxy-8dbqt" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:59.253390  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j6526" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:59.450380  145142 request.go:629] Waited for 196.900848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j6526
	I0719 04:25:59.450449  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j6526
	I0719 04:25:59.450455  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:59.450462  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:59.450466  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:59.453905  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:59.650183  145142 request.go:629] Waited for 195.38685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:59.650242  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:59.650248  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:59.650258  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:59.650264  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:59.653782  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:59.654347  145142 pod_ready.go:92] pod "kube-proxy-j6526" in "kube-system" namespace has status "Ready":"True"
	I0719 04:25:59.654365  145142 pod_ready.go:81] duration metric: took 400.967227ms for pod "kube-proxy-j6526" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:59.654382  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s6df4" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:59.849901  145142 request.go:629] Waited for 195.426207ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s6df4
	I0719 04:25:59.849976  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s6df4
	I0719 04:25:59.849987  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:59.850001  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:59.850008  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:59.853528  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:26:00.049577  145142 request.go:629] Waited for 195.405633ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:26:00.049648  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:26:00.049654  145142 round_trippers.go:469] Request Headers:
	I0719 04:26:00.049662  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:26:00.049669  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:26:00.052959  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:26:00.053718  145142 pod_ready.go:92] pod "kube-proxy-s6df4" in "kube-system" namespace has status "Ready":"True"
	I0719 04:26:00.053739  145142 pod_ready.go:81] duration metric: took 399.346448ms for pod "kube-proxy-s6df4" in "kube-system" namespace to be "Ready" ...
	I0719 04:26:00.053751  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-925161" in "kube-system" namespace to be "Ready" ...
	I0719 04:26:00.249858  145142 request.go:629] Waited for 196.008753ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-925161
	I0719 04:26:00.249916  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-925161
	I0719 04:26:00.249921  145142 round_trippers.go:469] Request Headers:
	I0719 04:26:00.249928  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:26:00.249932  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:26:00.253095  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:26:00.450275  145142 request.go:629] Waited for 196.238184ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:26:00.450340  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:26:00.450348  145142 round_trippers.go:469] Request Headers:
	I0719 04:26:00.450356  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:26:00.450360  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:26:00.453607  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:26:00.454212  145142 pod_ready.go:92] pod "kube-scheduler-ha-925161" in "kube-system" namespace has status "Ready":"True"
	I0719 04:26:00.454229  145142 pod_ready.go:81] duration metric: took 400.471839ms for pod "kube-scheduler-ha-925161" in "kube-system" namespace to be "Ready" ...
	I0719 04:26:00.454239  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-925161-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:26:00.649891  145142 request.go:629] Waited for 195.574792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-925161-m02
	I0719 04:26:00.649989  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-925161-m02
	I0719 04:26:00.649998  145142 round_trippers.go:469] Request Headers:
	I0719 04:26:00.650010  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:26:00.650017  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:26:00.653707  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:26:00.849941  145142 request.go:629] Waited for 195.367136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:26:00.849999  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:26:00.850004  145142 round_trippers.go:469] Request Headers:
	I0719 04:26:00.850012  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:26:00.850017  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:26:00.854122  145142 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:26:00.854897  145142 pod_ready.go:92] pod "kube-scheduler-ha-925161-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 04:26:00.854921  145142 pod_ready.go:81] duration metric: took 400.674776ms for pod "kube-scheduler-ha-925161-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:26:00.854936  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-925161-m03" in "kube-system" namespace to be "Ready" ...
	I0719 04:26:01.049976  145142 request.go:629] Waited for 194.971665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-925161-m03
	I0719 04:26:01.050039  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-925161-m03
	I0719 04:26:01.050045  145142 round_trippers.go:469] Request Headers:
	I0719 04:26:01.050051  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:26:01.050055  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:26:01.053846  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:26:01.249793  145142 request.go:629] Waited for 195.310307ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:26:01.249889  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:26:01.249900  145142 round_trippers.go:469] Request Headers:
	I0719 04:26:01.249912  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:26:01.249923  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:26:01.253321  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:26:01.253857  145142 pod_ready.go:92] pod "kube-scheduler-ha-925161-m03" in "kube-system" namespace has status "Ready":"True"
	I0719 04:26:01.253875  145142 pod_ready.go:81] duration metric: took 398.932004ms for pod "kube-scheduler-ha-925161-m03" in "kube-system" namespace to be "Ready" ...
	I0719 04:26:01.253887  145142 pod_ready.go:38] duration metric: took 5.20007621s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 04:26:01.253902  145142 api_server.go:52] waiting for apiserver process to appear ...
	I0719 04:26:01.253961  145142 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 04:26:01.270775  145142 api_server.go:72] duration metric: took 23.034046733s to wait for apiserver process to appear ...
	I0719 04:26:01.270799  145142 api_server.go:88] waiting for apiserver healthz status ...
	I0719 04:26:01.270816  145142 api_server.go:253] Checking apiserver healthz at https://192.168.39.246:8443/healthz ...
	I0719 04:26:01.275256  145142 api_server.go:279] https://192.168.39.246:8443/healthz returned 200:
	ok
	I0719 04:26:01.275344  145142 round_trippers.go:463] GET https://192.168.39.246:8443/version
	I0719 04:26:01.275355  145142 round_trippers.go:469] Request Headers:
	I0719 04:26:01.275368  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:26:01.275378  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:26:01.276552  145142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 04:26:01.276638  145142 api_server.go:141] control plane version: v1.30.3
	I0719 04:26:01.276659  145142 api_server.go:131] duration metric: took 5.852592ms to wait for apiserver health ...
	I0719 04:26:01.276668  145142 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 04:26:01.450105  145142 request.go:629] Waited for 173.348425ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0719 04:26:01.450177  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0719 04:26:01.450182  145142 round_trippers.go:469] Request Headers:
	I0719 04:26:01.450190  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:26:01.450195  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:26:01.457087  145142 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:26:01.463884  145142 system_pods.go:59] 24 kube-system pods found
	I0719 04:26:01.463912  145142 system_pods.go:61] "coredns-7db6d8ff4d-7wzcg" [a434f69a-903d-4961-a54c-9a85cbc694b1] Running
	I0719 04:26:01.463919  145142 system_pods.go:61] "coredns-7db6d8ff4d-hwdsq" [894f9528-78da-4cae-9ec6-8e82a3e73264] Running
	I0719 04:26:01.463923  145142 system_pods.go:61] "etcd-ha-925161" [35b14af9-6e7d-4e5c-8c43-fa427109cde3] Running
	I0719 04:26:01.463926  145142 system_pods.go:61] "etcd-ha-925161-m02" [51f60536-03dc-4426-ac13-9d2ec33275f7] Running
	I0719 04:26:01.463930  145142 system_pods.go:61] "etcd-ha-925161-m03" [5d9cecc3-377d-401f-8d53-a70e7d31ccce] Running
	I0719 04:26:01.463933  145142 system_pods.go:61] "kindnet-7gvt6" [3980fcc1-695c-4b62-aab6-93872f4ddc11] Running
	I0719 04:26:01.463937  145142 system_pods.go:61] "kindnet-dkctc" [4ec93698-4a91-44fa-a37f-405bf1a5fa95] Running
	I0719 04:26:01.463940  145142 system_pods.go:61] "kindnet-fsr5f" [988e1118-927a-4468-ba25-3a78d8d06919] Running
	I0719 04:26:01.463945  145142 system_pods.go:61] "kube-apiserver-ha-925161" [1c56f8e6-beb8-4dcc-ba56-5097516043a6] Running
	I0719 04:26:01.463951  145142 system_pods.go:61] "kube-apiserver-ha-925161-m02" [ceaa5f20-d023-482a-9905-54f8bc47da20] Running
	I0719 04:26:01.463954  145142 system_pods.go:61] "kube-apiserver-ha-925161-m03" [3c4984d6-1059-4195-ac82-81a271623c04] Running
	I0719 04:26:01.463960  145142 system_pods.go:61] "kube-controller-manager-ha-925161" [337e75e4-92e9-48fd-a46a-73ce174b4995] Running
	I0719 04:26:01.463963  145142 system_pods.go:61] "kube-controller-manager-ha-925161-m02" [d2d234a3-a18f-4618-9b77-4bcf771463b8] Running
	I0719 04:26:01.463969  145142 system_pods.go:61] "kube-controller-manager-ha-925161-m03" [63e944cd-c1b1-41dc-9fd5-3ad11af12f8b] Running
	I0719 04:26:01.463971  145142 system_pods.go:61] "kube-proxy-8dbqt" [cd11aac3-62df-4603-8102-3384bcc100f1] Running
	I0719 04:26:01.463974  145142 system_pods.go:61] "kube-proxy-j6526" [20b69c28-de0f-4ed7-846c-848d9e938c46] Running
	I0719 04:26:01.463977  145142 system_pods.go:61] "kube-proxy-s6df4" [3373d2d8-4189-48a0-aefc-2ad0511b2a6b] Running
	I0719 04:26:01.463981  145142 system_pods.go:61] "kube-scheduler-ha-925161" [6c1c9f30-93c9-4def-b54e-97b8e27cd12b] Running
	I0719 04:26:01.463984  145142 system_pods.go:61] "kube-scheduler-ha-925161-m02" [60ea2e22-0456-40bc-bddd-32b6737350b3] Running
	I0719 04:26:01.463986  145142 system_pods.go:61] "kube-scheduler-ha-925161-m03" [16e97f9c-20d3-4c3a-988c-b3fce5955407] Running
	I0719 04:26:01.463990  145142 system_pods.go:61] "kube-vip-ha-925161" [8d01a874-336e-476c-b079-852250b3bbcd] Running
	I0719 04:26:01.463994  145142 system_pods.go:61] "kube-vip-ha-925161-m02" [0cb6b1ed-566b-4f64-903b-5af108816970] Running
	I0719 04:26:01.463997  145142 system_pods.go:61] "kube-vip-ha-925161-m03" [0dc7d41b-900e-4d18-9692-c363d4e46dac] Running
	I0719 04:26:01.464001  145142 system_pods.go:61] "storage-provisioner" [bf27da3d-f736-4742-9af5-2c0a024075ec] Running
	I0719 04:26:01.464006  145142 system_pods.go:74] duration metric: took 187.333411ms to wait for pod list to return data ...
	I0719 04:26:01.464021  145142 default_sa.go:34] waiting for default service account to be created ...
	I0719 04:26:01.649422  145142 request.go:629] Waited for 185.324586ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I0719 04:26:01.649484  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I0719 04:26:01.649490  145142 round_trippers.go:469] Request Headers:
	I0719 04:26:01.649500  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:26:01.649511  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:26:01.652810  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:26:01.652963  145142 default_sa.go:45] found service account: "default"
	I0719 04:26:01.652982  145142 default_sa.go:55] duration metric: took 188.951369ms for default service account to be created ...
	I0719 04:26:01.652996  145142 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 04:26:01.850280  145142 request.go:629] Waited for 197.193378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0719 04:26:01.850361  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0719 04:26:01.850374  145142 round_trippers.go:469] Request Headers:
	I0719 04:26:01.850385  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:26:01.850391  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:26:01.884097  145142 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I0719 04:26:01.890498  145142 system_pods.go:86] 24 kube-system pods found
	I0719 04:26:01.890529  145142 system_pods.go:89] "coredns-7db6d8ff4d-7wzcg" [a434f69a-903d-4961-a54c-9a85cbc694b1] Running
	I0719 04:26:01.890536  145142 system_pods.go:89] "coredns-7db6d8ff4d-hwdsq" [894f9528-78da-4cae-9ec6-8e82a3e73264] Running
	I0719 04:26:01.890543  145142 system_pods.go:89] "etcd-ha-925161" [35b14af9-6e7d-4e5c-8c43-fa427109cde3] Running
	I0719 04:26:01.890548  145142 system_pods.go:89] "etcd-ha-925161-m02" [51f60536-03dc-4426-ac13-9d2ec33275f7] Running
	I0719 04:26:01.890555  145142 system_pods.go:89] "etcd-ha-925161-m03" [5d9cecc3-377d-401f-8d53-a70e7d31ccce] Running
	I0719 04:26:01.890561  145142 system_pods.go:89] "kindnet-7gvt6" [3980fcc1-695c-4b62-aab6-93872f4ddc11] Running
	I0719 04:26:01.890566  145142 system_pods.go:89] "kindnet-dkctc" [4ec93698-4a91-44fa-a37f-405bf1a5fa95] Running
	I0719 04:26:01.890572  145142 system_pods.go:89] "kindnet-fsr5f" [988e1118-927a-4468-ba25-3a78d8d06919] Running
	I0719 04:26:01.890577  145142 system_pods.go:89] "kube-apiserver-ha-925161" [1c56f8e6-beb8-4dcc-ba56-5097516043a6] Running
	I0719 04:26:01.890584  145142 system_pods.go:89] "kube-apiserver-ha-925161-m02" [ceaa5f20-d023-482a-9905-54f8bc47da20] Running
	I0719 04:26:01.890590  145142 system_pods.go:89] "kube-apiserver-ha-925161-m03" [3c4984d6-1059-4195-ac82-81a271623c04] Running
	I0719 04:26:01.890597  145142 system_pods.go:89] "kube-controller-manager-ha-925161" [337e75e4-92e9-48fd-a46a-73ce174b4995] Running
	I0719 04:26:01.890607  145142 system_pods.go:89] "kube-controller-manager-ha-925161-m02" [d2d234a3-a18f-4618-9b77-4bcf771463b8] Running
	I0719 04:26:01.890613  145142 system_pods.go:89] "kube-controller-manager-ha-925161-m03" [63e944cd-c1b1-41dc-9fd5-3ad11af12f8b] Running
	I0719 04:26:01.890620  145142 system_pods.go:89] "kube-proxy-8dbqt" [cd11aac3-62df-4603-8102-3384bcc100f1] Running
	I0719 04:26:01.890629  145142 system_pods.go:89] "kube-proxy-j6526" [20b69c28-de0f-4ed7-846c-848d9e938c46] Running
	I0719 04:26:01.890638  145142 system_pods.go:89] "kube-proxy-s6df4" [3373d2d8-4189-48a0-aefc-2ad0511b2a6b] Running
	I0719 04:26:01.890648  145142 system_pods.go:89] "kube-scheduler-ha-925161" [6c1c9f30-93c9-4def-b54e-97b8e27cd12b] Running
	I0719 04:26:01.890654  145142 system_pods.go:89] "kube-scheduler-ha-925161-m02" [60ea2e22-0456-40bc-bddd-32b6737350b3] Running
	I0719 04:26:01.890659  145142 system_pods.go:89] "kube-scheduler-ha-925161-m03" [16e97f9c-20d3-4c3a-988c-b3fce5955407] Running
	I0719 04:26:01.890666  145142 system_pods.go:89] "kube-vip-ha-925161" [8d01a874-336e-476c-b079-852250b3bbcd] Running
	I0719 04:26:01.890670  145142 system_pods.go:89] "kube-vip-ha-925161-m02" [0cb6b1ed-566b-4f64-903b-5af108816970] Running
	I0719 04:26:01.890674  145142 system_pods.go:89] "kube-vip-ha-925161-m03" [0dc7d41b-900e-4d18-9692-c363d4e46dac] Running
	I0719 04:26:01.890680  145142 system_pods.go:89] "storage-provisioner" [bf27da3d-f736-4742-9af5-2c0a024075ec] Running
	I0719 04:26:01.890690  145142 system_pods.go:126] duration metric: took 237.684394ms to wait for k8s-apps to be running ...
	I0719 04:26:01.890700  145142 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 04:26:01.890747  145142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:26:01.910434  145142 system_svc.go:56] duration metric: took 19.724775ms WaitForService to wait for kubelet
	I0719 04:26:01.910462  145142 kubeadm.go:582] duration metric: took 23.673736861s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 04:26:01.910482  145142 node_conditions.go:102] verifying NodePressure condition ...
	I0719 04:26:02.049873  145142 request.go:629] Waited for 139.294558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes
	I0719 04:26:02.049930  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes
	I0719 04:26:02.049936  145142 round_trippers.go:469] Request Headers:
	I0719 04:26:02.049943  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:26:02.049949  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:26:02.053903  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:26:02.055081  145142 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 04:26:02.055102  145142 node_conditions.go:123] node cpu capacity is 2
	I0719 04:26:02.055114  145142 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 04:26:02.055117  145142 node_conditions.go:123] node cpu capacity is 2
	I0719 04:26:02.055121  145142 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 04:26:02.055124  145142 node_conditions.go:123] node cpu capacity is 2
	I0719 04:26:02.055127  145142 node_conditions.go:105] duration metric: took 144.641214ms to run NodePressure ...
	I0719 04:26:02.055138  145142 start.go:241] waiting for startup goroutines ...
	I0719 04:26:02.055157  145142 start.go:255] writing updated cluster config ...
	I0719 04:26:02.055529  145142 ssh_runner.go:195] Run: rm -f paused
	I0719 04:26:02.109185  145142 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 04:26:02.111352  145142 out.go:177] * Done! kubectl is now configured to use "ha-925161" cluster and "default" namespace by default
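	
	For reference, the pod_ready.go waits logged above amount to polling each control-plane pod until its Ready condition reports True (with the client-side throttling pauses visible in the request.go lines). The sketch below is illustrative only, not minikube's actual helper code: it uses client-go directly, and the kubeconfig path, namespace, pod name, and polling intervals are assumptions chosen to mirror the "waiting up to 6m0s" messages in the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True,
// the same condition the pod_ready.go log lines above are waiting on.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed: default kubeconfig location; adjust for a real cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 2s for up to 6 minutes, loosely matching the
	// "waiting up to 6m0s for pod ... to be Ready" behaviour in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-ha-925161", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat lookup errors as "not ready yet" and keep polling
			}
			return isPodReady(pod), nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}

	In the test run itself these checks are performed by minikube's own helpers (pod_ready.go, api_server.go, system_pods.go); the sketch only mirrors the behaviour observable in the log output above.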
	
	
	==> CRI-O <==
	Jul 19 04:30:30 ha-925161 crio[682]: time="2024-07-19 04:30:30.422831272Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721363430422806255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144984,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e84196b7-36a8-4142-8b74-c2e85bdcaa53 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 04:30:30 ha-925161 crio[682]: time="2024-07-19 04:30:30.423373470Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e1814107-5b9b-4b6f-88e8-659ad3b1a1ce name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:30:30 ha-925161 crio[682]: time="2024-07-19 04:30:30.423421521Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e1814107-5b9b-4b6f-88e8-659ad3b1a1ce name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:30:30 ha-925161 crio[682]: time="2024-07-19 04:30:30.423640423Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:376dac90130c20ad5ee1fd7cda6913750ce2847ab6b24b8a5ade8f85a7933736,PodSandboxId:0d44fb43a7c0f7260d182a488fbebec1e6a62c08f3bbfbe0601b399af7548cc1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721363166324611006,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xjdg9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e5d1049-6c89-429b-96a8-cbb8abd2b26f,},Annotations:map[string]string{io.kubernetes.container.hash: 3ac1d6ce,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8fbd19dd4d996dc34ade93b87b023c88d559af11bbea8aaf9f9b3a2e6f05672,PodSandboxId:0bb04d64362d6e31033c86d2709d8dea8839e5561f013e6cb8daeb9084a3c238,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721363015205884262,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hwdsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894f9528-78da-4cae-9ec6-8e82a3e73264,},Annotations:map[string]string{io.kubernetes.container.hash: 7fba949d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f21e70e6b65805b44f3ff4e90dd773f402ad0eb25822b81eda2c2816bc7691,PodSandboxId:62bcd5e2d22cb8954eda67d5b70c31e06ef2499d3b74d790b9661ca694f80657,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721363015144650485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wzcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
a434f69a-903d-4961-a54c-9a85cbc694b1,},Annotations:map[string]string{io.kubernetes.container.hash: 40e24975,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c755e5c5cff44f8a7c38a73192c243bbcdb84c3f5da3847d21531941a8b95d93,PodSandboxId:40cd7297d1d53fed31be961d6e39847b14d8d75a0e4eca3b0c9b05a3cec7ac54,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1721363015082766923,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf27da3d-f736-4742-9af5-2c0a024075ec,},Annotations:map[string]string{io.kubernetes.container.hash: f97e67c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1109d10f2b3d47b16b6903644a11f08183882b4c704e06aa23088c57b2fa5036,PodSandboxId:b3c277ef1f53ba98a24302115099ce4ef05d3e256d942b1f4d3157995b54ecae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:17213630
03130579717,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fsr5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 988e1118-927a-4468-ba25-3a78d8d06919,},Annotations:map[string]string{io.kubernetes.container.hash: 7fbb852b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c9e12889a1662da7b7e18d67865a94825c080659d17bd601943c475e944e3b6,PodSandboxId:696364d98fd5c2d4bd655bdb3ca5e141f8f51b0ca3c66da051ac248dc390a4d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721363002828843500,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8dbqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd11aac3-62df-4603-8102-3384bcc100f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3deffe05,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae55b7f5bd7bf842ca50cf5c5b471045260fe96b7a4a5ff03cf587c15f692412,PodSandboxId:42a74695a301994a8fe69f505b946596a45928011a694e7f458b0030c12c6c11,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721362985963244969,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eda1524f631b786182d69b02283573f,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeef22350ca0f1246fb4f32ba2c27abda6e12fc362e76159e9806d53d4296a23,PodSandboxId:fa3836c68c71d461c5bb30a2c7d5752ee698b31422b3aa8d734b871de431a0e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721362982966094602,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa73bd154bae08cde433b82e51ec78df,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b041f48cc90cfab5ce219992de0d8ffb58d778e852683bd80044d359a0b7d010,PodSandboxId:a03be60cf1fe9442e05ead4b2fd503182a4cb9cd50acaaedfd7ef1a4024aab8c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721362982930573061,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cca920f3f48d0fa2da37f2a22f12ba,},Annotations:map[string]string{io.kubernetes.container.hash: 64aa1422,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6794bae567b7e2c964bdfab18ab28a02cd5bad8823d55bae131a60e8dbefd012,PodSandboxId:5deb82997eca5aa2cd0fcbe3083dd4d824032623e4e1727dd40d362c5defc745,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721362982917267786,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-925161,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c423aaede6d00f00e13551d35c79c4b,},Annotations:map[string]string{io.kubernetes.container.hash: 9eff95bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:882ed073edd75b4a9831d3ded02cad425e74f0eab0bb34819f37757829560513,PodSandboxId:a1d0203f57600d7f98a4d21b8e859ad53d31a54211458e99baede150d4f27f62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721362982883247583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-925161,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 349099d3ab7836a83b145a30eb9936d6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e1814107-5b9b-4b6f-88e8-659ad3b1a1ce name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:30:30 ha-925161 crio[682]: time="2024-07-19 04:30:30.463376357Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4e92e624-0d84-4551-92ed-a00aa31040cb name=/runtime.v1.RuntimeService/Version
	Jul 19 04:30:30 ha-925161 crio[682]: time="2024-07-19 04:30:30.464094038Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4e92e624-0d84-4551-92ed-a00aa31040cb name=/runtime.v1.RuntimeService/Version
	Jul 19 04:30:30 ha-925161 crio[682]: time="2024-07-19 04:30:30.465168314Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7a16bb84-9ebb-4701-8ed3-cdbd0fbb0579 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 04:30:30 ha-925161 crio[682]: time="2024-07-19 04:30:30.465603276Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721363430465580038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144984,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7a16bb84-9ebb-4701-8ed3-cdbd0fbb0579 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 04:30:30 ha-925161 crio[682]: time="2024-07-19 04:30:30.466068904Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=19b927c4-14a9-4f53-ae98-e9f20a17d40e name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:30:30 ha-925161 crio[682]: time="2024-07-19 04:30:30.466117977Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=19b927c4-14a9-4f53-ae98-e9f20a17d40e name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:30:30 ha-925161 crio[682]: time="2024-07-19 04:30:30.466339695Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:376dac90130c20ad5ee1fd7cda6913750ce2847ab6b24b8a5ade8f85a7933736,PodSandboxId:0d44fb43a7c0f7260d182a488fbebec1e6a62c08f3bbfbe0601b399af7548cc1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721363166324611006,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xjdg9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e5d1049-6c89-429b-96a8-cbb8abd2b26f,},Annotations:map[string]string{io.kubernetes.container.hash: 3ac1d6ce,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8fbd19dd4d996dc34ade93b87b023c88d559af11bbea8aaf9f9b3a2e6f05672,PodSandboxId:0bb04d64362d6e31033c86d2709d8dea8839e5561f013e6cb8daeb9084a3c238,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721363015205884262,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hwdsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894f9528-78da-4cae-9ec6-8e82a3e73264,},Annotations:map[string]string{io.kubernetes.container.hash: 7fba949d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f21e70e6b65805b44f3ff4e90dd773f402ad0eb25822b81eda2c2816bc7691,PodSandboxId:62bcd5e2d22cb8954eda67d5b70c31e06ef2499d3b74d790b9661ca694f80657,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721363015144650485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wzcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
a434f69a-903d-4961-a54c-9a85cbc694b1,},Annotations:map[string]string{io.kubernetes.container.hash: 40e24975,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c755e5c5cff44f8a7c38a73192c243bbcdb84c3f5da3847d21531941a8b95d93,PodSandboxId:40cd7297d1d53fed31be961d6e39847b14d8d75a0e4eca3b0c9b05a3cec7ac54,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1721363015082766923,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf27da3d-f736-4742-9af5-2c0a024075ec,},Annotations:map[string]string{io.kubernetes.container.hash: f97e67c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1109d10f2b3d47b16b6903644a11f08183882b4c704e06aa23088c57b2fa5036,PodSandboxId:b3c277ef1f53ba98a24302115099ce4ef05d3e256d942b1f4d3157995b54ecae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:17213630
03130579717,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fsr5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 988e1118-927a-4468-ba25-3a78d8d06919,},Annotations:map[string]string{io.kubernetes.container.hash: 7fbb852b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c9e12889a1662da7b7e18d67865a94825c080659d17bd601943c475e944e3b6,PodSandboxId:696364d98fd5c2d4bd655bdb3ca5e141f8f51b0ca3c66da051ac248dc390a4d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721363002828843500,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8dbqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd11aac3-62df-4603-8102-3384bcc100f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3deffe05,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae55b7f5bd7bf842ca50cf5c5b471045260fe96b7a4a5ff03cf587c15f692412,PodSandboxId:42a74695a301994a8fe69f505b946596a45928011a694e7f458b0030c12c6c11,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721362985963244969,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eda1524f631b786182d69b02283573f,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeef22350ca0f1246fb4f32ba2c27abda6e12fc362e76159e9806d53d4296a23,PodSandboxId:fa3836c68c71d461c5bb30a2c7d5752ee698b31422b3aa8d734b871de431a0e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721362982966094602,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa73bd154bae08cde433b82e51ec78df,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b041f48cc90cfab5ce219992de0d8ffb58d778e852683bd80044d359a0b7d010,PodSandboxId:a03be60cf1fe9442e05ead4b2fd503182a4cb9cd50acaaedfd7ef1a4024aab8c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721362982930573061,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cca920f3f48d0fa2da37f2a22f12ba,},Annotations:map[string]string{io.kubernetes.container.hash: 64aa1422,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6794bae567b7e2c964bdfab18ab28a02cd5bad8823d55bae131a60e8dbefd012,PodSandboxId:5deb82997eca5aa2cd0fcbe3083dd4d824032623e4e1727dd40d362c5defc745,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721362982917267786,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-925161,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c423aaede6d00f00e13551d35c79c4b,},Annotations:map[string]string{io.kubernetes.container.hash: 9eff95bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:882ed073edd75b4a9831d3ded02cad425e74f0eab0bb34819f37757829560513,PodSandboxId:a1d0203f57600d7f98a4d21b8e859ad53d31a54211458e99baede150d4f27f62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721362982883247583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-925161,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 349099d3ab7836a83b145a30eb9936d6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=19b927c4-14a9-4f53-ae98-e9f20a17d40e name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:30:30 ha-925161 crio[682]: time="2024-07-19 04:30:30.501007259Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fd33eae8-5abb-44fc-9c35-f2ad9cacbc45 name=/runtime.v1.RuntimeService/Version
	Jul 19 04:30:30 ha-925161 crio[682]: time="2024-07-19 04:30:30.501243373Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fd33eae8-5abb-44fc-9c35-f2ad9cacbc45 name=/runtime.v1.RuntimeService/Version
	Jul 19 04:30:30 ha-925161 crio[682]: time="2024-07-19 04:30:30.502015486Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=865a1ec7-e320-44b5-a8a1-660574deedcb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 04:30:30 ha-925161 crio[682]: time="2024-07-19 04:30:30.502575025Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721363430502550509,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144984,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=865a1ec7-e320-44b5-a8a1-660574deedcb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 04:30:30 ha-925161 crio[682]: time="2024-07-19 04:30:30.503245902Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2b1eb05c-3d43-4ba9-8a96-4212b5e21869 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:30:30 ha-925161 crio[682]: time="2024-07-19 04:30:30.503336312Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2b1eb05c-3d43-4ba9-8a96-4212b5e21869 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:30:30 ha-925161 crio[682]: time="2024-07-19 04:30:30.503588904Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:376dac90130c20ad5ee1fd7cda6913750ce2847ab6b24b8a5ade8f85a7933736,PodSandboxId:0d44fb43a7c0f7260d182a488fbebec1e6a62c08f3bbfbe0601b399af7548cc1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721363166324611006,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xjdg9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e5d1049-6c89-429b-96a8-cbb8abd2b26f,},Annotations:map[string]string{io.kubernetes.container.hash: 3ac1d6ce,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8fbd19dd4d996dc34ade93b87b023c88d559af11bbea8aaf9f9b3a2e6f05672,PodSandboxId:0bb04d64362d6e31033c86d2709d8dea8839e5561f013e6cb8daeb9084a3c238,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721363015205884262,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hwdsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894f9528-78da-4cae-9ec6-8e82a3e73264,},Annotations:map[string]string{io.kubernetes.container.hash: 7fba949d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f21e70e6b65805b44f3ff4e90dd773f402ad0eb25822b81eda2c2816bc7691,PodSandboxId:62bcd5e2d22cb8954eda67d5b70c31e06ef2499d3b74d790b9661ca694f80657,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721363015144650485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wzcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
a434f69a-903d-4961-a54c-9a85cbc694b1,},Annotations:map[string]string{io.kubernetes.container.hash: 40e24975,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c755e5c5cff44f8a7c38a73192c243bbcdb84c3f5da3847d21531941a8b95d93,PodSandboxId:40cd7297d1d53fed31be961d6e39847b14d8d75a0e4eca3b0c9b05a3cec7ac54,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1721363015082766923,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf27da3d-f736-4742-9af5-2c0a024075ec,},Annotations:map[string]string{io.kubernetes.container.hash: f97e67c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1109d10f2b3d47b16b6903644a11f08183882b4c704e06aa23088c57b2fa5036,PodSandboxId:b3c277ef1f53ba98a24302115099ce4ef05d3e256d942b1f4d3157995b54ecae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:17213630
03130579717,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fsr5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 988e1118-927a-4468-ba25-3a78d8d06919,},Annotations:map[string]string{io.kubernetes.container.hash: 7fbb852b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c9e12889a1662da7b7e18d67865a94825c080659d17bd601943c475e944e3b6,PodSandboxId:696364d98fd5c2d4bd655bdb3ca5e141f8f51b0ca3c66da051ac248dc390a4d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721363002828843500,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8dbqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd11aac3-62df-4603-8102-3384bcc100f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3deffe05,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae55b7f5bd7bf842ca50cf5c5b471045260fe96b7a4a5ff03cf587c15f692412,PodSandboxId:42a74695a301994a8fe69f505b946596a45928011a694e7f458b0030c12c6c11,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721362985963244969,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eda1524f631b786182d69b02283573f,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeef22350ca0f1246fb4f32ba2c27abda6e12fc362e76159e9806d53d4296a23,PodSandboxId:fa3836c68c71d461c5bb30a2c7d5752ee698b31422b3aa8d734b871de431a0e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721362982966094602,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa73bd154bae08cde433b82e51ec78df,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b041f48cc90cfab5ce219992de0d8ffb58d778e852683bd80044d359a0b7d010,PodSandboxId:a03be60cf1fe9442e05ead4b2fd503182a4cb9cd50acaaedfd7ef1a4024aab8c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721362982930573061,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cca920f3f48d0fa2da37f2a22f12ba,},Annotations:map[string]string{io.kubernetes.container.hash: 64aa1422,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6794bae567b7e2c964bdfab18ab28a02cd5bad8823d55bae131a60e8dbefd012,PodSandboxId:5deb82997eca5aa2cd0fcbe3083dd4d824032623e4e1727dd40d362c5defc745,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721362982917267786,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-925161,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c423aaede6d00f00e13551d35c79c4b,},Annotations:map[string]string{io.kubernetes.container.hash: 9eff95bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:882ed073edd75b4a9831d3ded02cad425e74f0eab0bb34819f37757829560513,PodSandboxId:a1d0203f57600d7f98a4d21b8e859ad53d31a54211458e99baede150d4f27f62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721362982883247583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-925161,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 349099d3ab7836a83b145a30eb9936d6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2b1eb05c-3d43-4ba9-8a96-4212b5e21869 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:30:30 ha-925161 crio[682]: time="2024-07-19 04:30:30.538352538Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0cb033b5-3005-460b-a4ce-e0b8290c773e name=/runtime.v1.RuntimeService/Version
	Jul 19 04:30:30 ha-925161 crio[682]: time="2024-07-19 04:30:30.538425139Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0cb033b5-3005-460b-a4ce-e0b8290c773e name=/runtime.v1.RuntimeService/Version
	Jul 19 04:30:30 ha-925161 crio[682]: time="2024-07-19 04:30:30.539497841Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=640eb05a-243a-4820-80dd-d55313c2464b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 04:30:30 ha-925161 crio[682]: time="2024-07-19 04:30:30.539888504Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721363430539867319,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144984,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=640eb05a-243a-4820-80dd-d55313c2464b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 04:30:30 ha-925161 crio[682]: time="2024-07-19 04:30:30.540718962Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=36471426-cb51-4dae-8d8f-3318de5a0007 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:30:30 ha-925161 crio[682]: time="2024-07-19 04:30:30.540784981Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=36471426-cb51-4dae-8d8f-3318de5a0007 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:30:30 ha-925161 crio[682]: time="2024-07-19 04:30:30.541133614Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:376dac90130c20ad5ee1fd7cda6913750ce2847ab6b24b8a5ade8f85a7933736,PodSandboxId:0d44fb43a7c0f7260d182a488fbebec1e6a62c08f3bbfbe0601b399af7548cc1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721363166324611006,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xjdg9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e5d1049-6c89-429b-96a8-cbb8abd2b26f,},Annotations:map[string]string{io.kubernetes.container.hash: 3ac1d6ce,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8fbd19dd4d996dc34ade93b87b023c88d559af11bbea8aaf9f9b3a2e6f05672,PodSandboxId:0bb04d64362d6e31033c86d2709d8dea8839e5561f013e6cb8daeb9084a3c238,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721363015205884262,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hwdsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894f9528-78da-4cae-9ec6-8e82a3e73264,},Annotations:map[string]string{io.kubernetes.container.hash: 7fba949d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f21e70e6b65805b44f3ff4e90dd773f402ad0eb25822b81eda2c2816bc7691,PodSandboxId:62bcd5e2d22cb8954eda67d5b70c31e06ef2499d3b74d790b9661ca694f80657,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721363015144650485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wzcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
a434f69a-903d-4961-a54c-9a85cbc694b1,},Annotations:map[string]string{io.kubernetes.container.hash: 40e24975,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c755e5c5cff44f8a7c38a73192c243bbcdb84c3f5da3847d21531941a8b95d93,PodSandboxId:40cd7297d1d53fed31be961d6e39847b14d8d75a0e4eca3b0c9b05a3cec7ac54,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1721363015082766923,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf27da3d-f736-4742-9af5-2c0a024075ec,},Annotations:map[string]string{io.kubernetes.container.hash: f97e67c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1109d10f2b3d47b16b6903644a11f08183882b4c704e06aa23088c57b2fa5036,PodSandboxId:b3c277ef1f53ba98a24302115099ce4ef05d3e256d942b1f4d3157995b54ecae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:17213630
03130579717,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fsr5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 988e1118-927a-4468-ba25-3a78d8d06919,},Annotations:map[string]string{io.kubernetes.container.hash: 7fbb852b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c9e12889a1662da7b7e18d67865a94825c080659d17bd601943c475e944e3b6,PodSandboxId:696364d98fd5c2d4bd655bdb3ca5e141f8f51b0ca3c66da051ac248dc390a4d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721363002828843500,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8dbqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd11aac3-62df-4603-8102-3384bcc100f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3deffe05,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae55b7f5bd7bf842ca50cf5c5b471045260fe96b7a4a5ff03cf587c15f692412,PodSandboxId:42a74695a301994a8fe69f505b946596a45928011a694e7f458b0030c12c6c11,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721362985963244969,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eda1524f631b786182d69b02283573f,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeef22350ca0f1246fb4f32ba2c27abda6e12fc362e76159e9806d53d4296a23,PodSandboxId:fa3836c68c71d461c5bb30a2c7d5752ee698b31422b3aa8d734b871de431a0e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721362982966094602,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa73bd154bae08cde433b82e51ec78df,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b041f48cc90cfab5ce219992de0d8ffb58d778e852683bd80044d359a0b7d010,PodSandboxId:a03be60cf1fe9442e05ead4b2fd503182a4cb9cd50acaaedfd7ef1a4024aab8c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721362982930573061,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cca920f3f48d0fa2da37f2a22f12ba,},Annotations:map[string]string{io.kubernetes.container.hash: 64aa1422,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6794bae567b7e2c964bdfab18ab28a02cd5bad8823d55bae131a60e8dbefd012,PodSandboxId:5deb82997eca5aa2cd0fcbe3083dd4d824032623e4e1727dd40d362c5defc745,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721362982917267786,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-925161,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c423aaede6d00f00e13551d35c79c4b,},Annotations:map[string]string{io.kubernetes.container.hash: 9eff95bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:882ed073edd75b4a9831d3ded02cad425e74f0eab0bb34819f37757829560513,PodSandboxId:a1d0203f57600d7f98a4d21b8e859ad53d31a54211458e99baede150d4f27f62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721362982883247583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-925161,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 349099d3ab7836a83b145a30eb9936d6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=36471426-cb51-4dae-8d8f-3318de5a0007 name=/runtime.v1.RuntimeService/ListContainers
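The crio debug log above is dominated by the same three CRI RPCs repeating (Version, ImageFsInfo, ListContainers with an empty filter). As a point of reference, here is a minimal sketch, not part of the test harness, of issuing those same calls against the CRI-O socket with the published k8s.io/cri-api Go client; the socket path comes from the node annotation `kubeadm.alpha.kubernetes.io/cri-socket` shown later in this log, and error handling is trimmed for brevity.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Dial the CRI-O socket (path taken from the cri-socket annotation in this log).
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	// /runtime.v1.RuntimeService/Version
	ver, _ := rt.Version(ctx, &runtimeapi.VersionRequest{})
	fmt.Println("runtime:", ver.GetRuntimeName(), ver.GetRuntimeVersion())

	// /runtime.v1.ImageService/ImageFsInfo
	fs, _ := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	for _, u := range fs.GetImageFilesystems() {
		fmt.Println("image fs:", u.GetFsId().GetMountpoint(), u.GetUsedBytes().GetValue(), "bytes")
	}

	// /runtime.v1.RuntimeService/ListContainers with an empty filter, i.e.
	// "No filters were applied, returning full container list".
	list, _ := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{},
	})
	for _, c := range list.GetContainers() {
		fmt.Println(c.GetMetadata().GetName(), c.GetState())
	}
}
```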
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	376dac90130c2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   0d44fb43a7c0f       busybox-fc5497c4f-xjdg9
	f8fbd19dd4d99       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   0bb04d64362d6       coredns-7db6d8ff4d-hwdsq
	14f21e70e6b65       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   62bcd5e2d22cb       coredns-7db6d8ff4d-7wzcg
	c755e5c5cff44       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   40cd7297d1d53       storage-provisioner
	1109d10f2b3d4       5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f                                      7 minutes ago       Running             kindnet-cni               0                   b3c277ef1f53b       kindnet-fsr5f
	6c9e12889a166       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      7 minutes ago       Running             kube-proxy                0                   696364d98fd5c       kube-proxy-8dbqt
	ae55b7f5bd7bf       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   42a74695a3019       kube-vip-ha-925161
	eeef22350ca0f       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      7 minutes ago       Running             kube-scheduler            0                   fa3836c68c71d       kube-scheduler-ha-925161
	b041f48cc90cf       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   a03be60cf1fe9       etcd-ha-925161
	6794bae567b7e       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      7 minutes ago       Running             kube-apiserver            0                   5deb82997eca5       kube-apiserver-ha-925161
	882ed073edd75       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      7 minutes ago       Running             kube-controller-manager   0                   a1d0203f57600       kube-controller-manager-ha-925161
	
	
	==> coredns [14f21e70e6b65805b44f3ff4e90dd773f402ad0eb25822b81eda2c2816bc7691] <==
	[INFO] 10.244.0.4:60754 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129059s
	[INFO] 10.244.0.4:43447 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000075335s
	[INFO] 10.244.0.4:60737 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000497893s
	[INFO] 10.244.0.4:51122 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001603238s
	[INFO] 10.244.1.2:37547 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000201994s
	[INFO] 10.244.1.2:41971 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00346851s
	[INFO] 10.244.1.2:57720 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114773s
	[INFO] 10.244.2.3:58305 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001754058s
	[INFO] 10.244.2.3:54206 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000118435s
	[INFO] 10.244.2.3:37056 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000234861s
	[INFO] 10.244.2.3:45425 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073142s
	[INFO] 10.244.0.4:54647 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007602s
	[INFO] 10.244.0.4:33742 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001338144s
	[INFO] 10.244.1.2:58214 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123014s
	[INFO] 10.244.1.2:58591 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083326s
	[INFO] 10.244.1.2:33227 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000196172s
	[INFO] 10.244.2.3:49582 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115766s
	[INFO] 10.244.2.3:46761 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109526s
	[INFO] 10.244.0.4:50248 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066399s
	[INFO] 10.244.1.2:45766 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00012847s
	[INFO] 10.244.1.2:57759 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000145394s
	[INFO] 10.244.2.3:50037 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160043s
	[INFO] 10.244.2.3:49469 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000075305s
	[INFO] 10.244.2.3:39504 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000057986s
	[INFO] 10.244.0.4:39098 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096095s
	
	
	==> coredns [f8fbd19dd4d996dc34ade93b87b023c88d559af11bbea8aaf9f9b3a2e6f05672] <==
	[INFO] 10.244.1.2:34010 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000219789s
	[INFO] 10.244.1.2:47044 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126724s
	[INFO] 10.244.1.2:42035 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109926s
	[INFO] 10.244.2.3:42792 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145146s
	[INFO] 10.244.2.3:38794 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000083694s
	[INFO] 10.244.2.3:48698 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001253504s
	[INFO] 10.244.2.3:45424 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060715s
	[INFO] 10.244.0.4:53435 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00016485s
	[INFO] 10.244.0.4:47050 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001790838s
	[INFO] 10.244.0.4:38074 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000058109s
	[INFO] 10.244.0.4:53487 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000066861s
	[INFO] 10.244.0.4:48230 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00012907s
	[INFO] 10.244.0.4:45713 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000053151s
	[INFO] 10.244.1.2:40224 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119446s
	[INFO] 10.244.2.3:48643 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000101063s
	[INFO] 10.244.2.3:59393 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008526s
	[INFO] 10.244.0.4:38457 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000103892s
	[INFO] 10.244.0.4:36242 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00015645s
	[INFO] 10.244.0.4:47871 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076477s
	[INFO] 10.244.1.2:44263 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176905s
	[INFO] 10.244.1.2:56297 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000215661s
	[INFO] 10.244.2.3:45341 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000148843s
	[INFO] 10.244.0.4:41990 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000105346s
	[INFO] 10.244.0.4:43204 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000121535s
	[INFO] 10.244.0.4:60972 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000251518s
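The CoreDNS logs above show routine A/AAAA/PTR lookups (kubernetes.default.svc.cluster.local, host.minikube.internal, the reverse entry 10.0.96.10.in-addr.arpa for the cluster DNS service at 10.96.0.10) all answering in well under a millisecond. A minimal sketch, assuming it is run from a pod on the cluster network, of reproducing those lookups directly against the cluster DNS address:

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			// Force queries to the cluster DNS service instead of the host resolver.
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, "udp", "10.96.0.10:53")
		},
	}

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Forward lookup, as in the A/AAAA queries logged by CoreDNS.
	addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
	fmt.Println(addrs, err)

	// Reverse lookup, as in the PTR queries for 10.0.96.10.in-addr.arpa.
	names, err := r.LookupAddr(ctx, "10.96.0.10")
	fmt.Println(names, err)
}
```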
	
	
	==> describe nodes <==
	Name:               ha-925161
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-925161
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=ha-925161
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T04_23_10_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 04:23:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-925161
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 04:30:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 04:26:12 +0000   Fri, 19 Jul 2024 04:23:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 04:26:12 +0000   Fri, 19 Jul 2024 04:23:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 04:26:12 +0000   Fri, 19 Jul 2024 04:23:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 04:26:12 +0000   Fri, 19 Jul 2024 04:23:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.246
	  Hostname:    ha-925161
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ff8c87164fa44c4f827d29ad58165cee
	  System UUID:                ff8c8716-4fa4-4c4f-827d-29ad58165cee
	  Boot ID:                    82d231ce-d7a6-41a1-a656-2e7410a6f84c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xjdg9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 coredns-7db6d8ff4d-7wzcg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m8s
	  kube-system                 coredns-7db6d8ff4d-hwdsq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m8s
	  kube-system                 etcd-ha-925161                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m21s
	  kube-system                 kindnet-fsr5f                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m9s
	  kube-system                 kube-apiserver-ha-925161             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m21s
	  kube-system                 kube-controller-manager-ha-925161    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m21s
	  kube-system                 kube-proxy-8dbqt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m9s
	  kube-system                 kube-scheduler-ha-925161             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m21s
	  kube-system                 kube-vip-ha-925161                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m7s   kube-proxy       
	  Normal  Starting                 7m21s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m21s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m21s  kubelet          Node ha-925161 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m21s  kubelet          Node ha-925161 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m21s  kubelet          Node ha-925161 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m9s   node-controller  Node ha-925161 event: Registered Node ha-925161 in Controller
	  Normal  NodeReady                6m56s  kubelet          Node ha-925161 status is now: NodeReady
	  Normal  RegisteredNode           5m53s  node-controller  Node ha-925161 event: Registered Node ha-925161 in Controller
	  Normal  RegisteredNode           4m38s  node-controller  Node ha-925161 event: Registered Node ha-925161 in Controller
	
	
	Name:               ha-925161-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-925161-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=ha-925161
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T04_24_23_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 04:24:20 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-925161-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 04:28:04 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 19 Jul 2024 04:26:22 +0000   Fri, 19 Jul 2024 04:28:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 19 Jul 2024 04:26:22 +0000   Fri, 19 Jul 2024 04:28:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 19 Jul 2024 04:26:22 +0000   Fri, 19 Jul 2024 04:28:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 19 Jul 2024 04:26:22 +0000   Fri, 19 Jul 2024 04:28:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-925161-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9158ff8415464fc08c01f2344e6694f7
	  System UUID:                9158ff84-1546-4fc0-8c01-f2344e6694f7
	  Boot ID:                    94533959-ddf8-4bdd-b493-22c20551603d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5785p                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 etcd-ha-925161-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m8s
	  kube-system                 kindnet-dkctc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m10s
	  kube-system                 kube-apiserver-ha-925161-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-controller-manager-ha-925161-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-proxy-s6df4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-scheduler-ha-925161-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-vip-ha-925161-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m6s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  6m10s (x8 over 6m10s)  kubelet          Node ha-925161-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m10s (x8 over 6m10s)  kubelet          Node ha-925161-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m10s (x7 over 6m10s)  kubelet          Node ha-925161-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m9s                   node-controller  Node ha-925161-m02 event: Registered Node ha-925161-m02 in Controller
	  Normal  RegisteredNode           5m53s                  node-controller  Node ha-925161-m02 event: Registered Node ha-925161-m02 in Controller
	  Normal  RegisteredNode           4m38s                  node-controller  Node ha-925161-m02 event: Registered Node ha-925161-m02 in Controller
	  Normal  NodeNotReady             104s                   node-controller  Node ha-925161-m02 status is now: NodeNotReady
	
	
	Name:               ha-925161-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-925161-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=ha-925161
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T04_25_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 04:25:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-925161-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 04:30:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 04:26:36 +0000   Fri, 19 Jul 2024 04:25:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 04:26:36 +0000   Fri, 19 Jul 2024 04:25:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 04:26:36 +0000   Fri, 19 Jul 2024 04:25:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 04:26:36 +0000   Fri, 19 Jul 2024 04:25:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.190
	  Hostname:    ha-925161-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3e81f7ca95c24874b7c002cc8e188173
	  System UUID:                3e81f7ca-95c2-4874-b7c0-02cc8e188173
	  Boot ID:                    b4cf88f1-2acb-4810-bae4-c71b13ffc20e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-t2m4d                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 etcd-ha-925161-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m54s
	  kube-system                 kindnet-7gvt6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m56s
	  kube-system                 kube-apiserver-ha-925161-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-controller-manager-ha-925161-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-proxy-j6526                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-scheduler-ha-925161-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-vip-ha-925161-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m51s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m56s (x8 over 4m56s)  kubelet          Node ha-925161-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m56s (x8 over 4m56s)  kubelet          Node ha-925161-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m56s (x7 over 4m56s)  kubelet          Node ha-925161-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m54s                  node-controller  Node ha-925161-m03 event: Registered Node ha-925161-m03 in Controller
	  Normal  RegisteredNode           4m53s                  node-controller  Node ha-925161-m03 event: Registered Node ha-925161-m03 in Controller
	  Normal  RegisteredNode           4m38s                  node-controller  Node ha-925161-m03 event: Registered Node ha-925161-m03 in Controller
	
	
	Name:               ha-925161-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-925161-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=ha-925161
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T04_27_30_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 04:27:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-925161-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 04:30:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 04:28:00 +0000   Fri, 19 Jul 2024 04:27:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 04:28:00 +0000   Fri, 19 Jul 2024 04:27:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 04:28:00 +0000   Fri, 19 Jul 2024 04:27:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 04:28:00 +0000   Fri, 19 Jul 2024 04:27:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.75
	  Hostname:    ha-925161-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e08274d02fa64707986686183076854f
	  System UUID:                e08274d0-2fa6-4707-9866-86183076854f
	  Boot ID:                    efd3e24c-8ce7-42df-8dd5-30a44f998179
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-dnwxp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m1s
	  kube-system                 kube-proxy-f4fgd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m1s (x3 over 3m1s)  kubelet          Node ha-925161-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m1s (x3 over 3m1s)  kubelet          Node ha-925161-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m1s (x3 over 3m1s)  kubelet          Node ha-925161-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-925161-m04 event: Registered Node ha-925161-m04 in Controller
	  Normal  RegisteredNode           2m58s                node-controller  Node ha-925161-m04 event: Registered Node ha-925161-m04 in Controller
	  Normal  RegisteredNode           2m58s                node-controller  Node ha-925161-m04 event: Registered Node ha-925161-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-925161-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul19 04:22] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050649] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037163] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.426710] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.747525] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.441980] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.442247] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.062592] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054468] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.195426] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.118864] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.257746] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +3.980513] systemd-fstab-generator[774]: Ignoring "noauto" option for root device
	[Jul19 04:23] systemd-fstab-generator[956]: Ignoring "noauto" option for root device
	[  +0.065569] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.069928] systemd-fstab-generator[1370]: Ignoring "noauto" option for root device
	[  +0.091097] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.840611] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.224120] kauditd_printk_skb: 38 callbacks suppressed
	[Jul19 04:24] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [b041f48cc90cfab5ce219992de0d8ffb58d778e852683bd80044d359a0b7d010] <==
	{"level":"warn","ts":"2024-07-19T04:30:30.824684Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:30:30.827472Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:30:30.828155Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:30:30.83017Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:30:30.836016Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:30:30.840202Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:30:30.845589Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:30:30.853704Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:30:30.862998Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:30:30.867324Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:30:30.868065Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:30:30.877656Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:30:30.881589Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:30:30.884221Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:30:30.893135Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:30:30.899372Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:30:30.906008Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:30:30.909852Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:30:30.913299Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:30:30.918486Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:30:30.925788Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:30:30.937716Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:30:30.962905Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:30:30.971246Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:30:30.972738Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 04:30:31 up 7 min,  0 users,  load average: 0.25, 0.20, 0.11
	Linux ha-925161 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1109d10f2b3d47b16b6903644a11f08183882b4c704e06aa23088c57b2fa5036] <==
	I0719 04:29:54.195751       1 main.go:326] Node ha-925161-m04 has CIDR [10.244.3.0/24] 
	I0719 04:30:04.194629       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0719 04:30:04.194666       1 main.go:303] handling current node
	I0719 04:30:04.194681       1 main.go:299] Handling node with IPs: map[192.168.39.102:{}]
	I0719 04:30:04.194686       1 main.go:326] Node ha-925161-m02 has CIDR [10.244.1.0/24] 
	I0719 04:30:04.194824       1 main.go:299] Handling node with IPs: map[192.168.39.190:{}]
	I0719 04:30:04.194847       1 main.go:326] Node ha-925161-m03 has CIDR [10.244.2.0/24] 
	I0719 04:30:04.194910       1 main.go:299] Handling node with IPs: map[192.168.39.75:{}]
	I0719 04:30:04.194916       1 main.go:326] Node ha-925161-m04 has CIDR [10.244.3.0/24] 
	I0719 04:30:14.203623       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0719 04:30:14.203778       1 main.go:303] handling current node
	I0719 04:30:14.203813       1 main.go:299] Handling node with IPs: map[192.168.39.102:{}]
	I0719 04:30:14.203834       1 main.go:326] Node ha-925161-m02 has CIDR [10.244.1.0/24] 
	I0719 04:30:14.204082       1 main.go:299] Handling node with IPs: map[192.168.39.190:{}]
	I0719 04:30:14.204119       1 main.go:326] Node ha-925161-m03 has CIDR [10.244.2.0/24] 
	I0719 04:30:14.204197       1 main.go:299] Handling node with IPs: map[192.168.39.75:{}]
	I0719 04:30:14.204216       1 main.go:326] Node ha-925161-m04 has CIDR [10.244.3.0/24] 
	I0719 04:30:24.195276       1 main.go:299] Handling node with IPs: map[192.168.39.75:{}]
	I0719 04:30:24.195404       1 main.go:326] Node ha-925161-m04 has CIDR [10.244.3.0/24] 
	I0719 04:30:24.195633       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0719 04:30:24.195663       1 main.go:303] handling current node
	I0719 04:30:24.195697       1 main.go:299] Handling node with IPs: map[192.168.39.102:{}]
	I0719 04:30:24.195705       1 main.go:326] Node ha-925161-m02 has CIDR [10.244.1.0/24] 
	I0719 04:30:24.195783       1 main.go:299] Handling node with IPs: map[192.168.39.190:{}]
	I0719 04:30:24.195808       1 main.go:326] Node ha-925161-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [6794bae567b7e2c964bdfab18ab28a02cd5bad8823d55bae131a60e8dbefd012] <==
	I0719 04:23:07.455068       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0719 04:23:07.460827       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.246]
	I0719 04:23:07.461717       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 04:23:07.466412       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0719 04:23:07.763875       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0719 04:23:09.195985       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0719 04:23:09.221533       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0719 04:23:09.235107       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0719 04:23:21.771412       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0719 04:23:21.881186       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0719 04:26:59.223684       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42422: use of closed network connection
	E0719 04:26:59.417925       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42438: use of closed network connection
	E0719 04:26:59.776835       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42488: use of closed network connection
	E0719 04:26:59.955113       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42502: use of closed network connection
	E0719 04:27:00.136541       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42532: use of closed network connection
	E0719 04:27:00.339873       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42552: use of closed network connection
	E0719 04:27:00.525493       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42564: use of closed network connection
	E0719 04:27:00.694817       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42588: use of closed network connection
	E0719 04:27:01.006092       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42618: use of closed network connection
	E0719 04:27:01.188324       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42634: use of closed network connection
	E0719 04:27:01.374442       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42650: use of closed network connection
	E0719 04:27:01.546875       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42662: use of closed network connection
	E0719 04:27:01.720064       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42674: use of closed network connection
	E0719 04:27:01.898991       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42706: use of closed network connection
	W0719 04:28:27.475164       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.190 192.168.39.246]
	
	
	==> kube-controller-manager [882ed073edd75b4a9831d3ded02cad425e74f0eab0bb34819f37757829560513] <==
	I0719 04:26:03.406414       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.175077ms"
	I0719 04:26:03.431612       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.518832ms"
	I0719 04:26:03.431750       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="81.533µs"
	I0719 04:26:03.568596       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="99.845964ms"
	E0719 04:26:03.568628       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0719 04:26:03.568710       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.525µs"
	I0719 04:26:03.575124       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.506µs"
	I0719 04:26:04.683397       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.891µs"
	I0719 04:26:06.870145       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.991928ms"
	I0719 04:26:06.870711       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="121.023µs"
	I0719 04:26:06.966996       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.716895ms"
	I0719 04:26:06.967301       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="109.612µs"
	I0719 04:26:08.700595       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.120877ms"
	I0719 04:26:08.700847       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.911µs"
	I0719 04:26:37.073643       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="94.391µs"
	I0719 04:26:38.035158       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.715µs"
	I0719 04:26:38.055651       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.58µs"
	I0719 04:26:38.065132       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.611µs"
	I0719 04:27:29.839844       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-925161-m04\" does not exist"
	I0719 04:27:29.872312       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-925161-m04" podCIDRs=["10.244.3.0/24"]
	I0719 04:27:31.298051       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-925161-m04"
	I0719 04:27:49.928802       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-925161-m04"
	I0719 04:28:46.337379       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-925161-m04"
	I0719 04:28:46.465735       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.522562ms"
	I0719 04:28:46.468259       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="105.595µs"
	
	
	==> kube-proxy [6c9e12889a1662da7b7e18d67865a94825c080659d17bd601943c475e944e3b6] <==
	I0719 04:23:23.013567       1 server_linux.go:69] "Using iptables proxy"
	I0719 04:23:23.037502       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.246"]
	I0719 04:23:23.076100       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 04:23:23.076198       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 04:23:23.076252       1 server_linux.go:165] "Using iptables Proxier"
	I0719 04:23:23.080405       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 04:23:23.081098       1 server.go:872] "Version info" version="v1.30.3"
	I0719 04:23:23.081123       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 04:23:23.083190       1 config.go:192] "Starting service config controller"
	I0719 04:23:23.083504       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 04:23:23.083558       1 config.go:101] "Starting endpoint slice config controller"
	I0719 04:23:23.083576       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 04:23:23.084640       1 config.go:319] "Starting node config controller"
	I0719 04:23:23.084667       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 04:23:23.184399       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 04:23:23.184522       1 shared_informer.go:320] Caches are synced for service config
	I0719 04:23:23.184817       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [eeef22350ca0f1246fb4f32ba2c27abda6e12fc362e76159e9806d53d4296a23] <==
	W0719 04:23:07.117760       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 04:23:07.117890       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 04:23:07.179619       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 04:23:07.179713       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0719 04:23:10.118015       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0719 04:25:34.802812       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-7gvt6\": pod kindnet-7gvt6 is already assigned to node \"ha-925161-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-7gvt6" node="ha-925161-m03"
	E0719 04:25:34.803093       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 3980fcc1-695c-4b62-aab6-93872f4ddc11(kube-system/kindnet-7gvt6) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-7gvt6"
	E0719 04:25:34.803142       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-7gvt6\": pod kindnet-7gvt6 is already assigned to node \"ha-925161-m03\"" pod="kube-system/kindnet-7gvt6"
	I0719 04:25:34.803192       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-7gvt6" node="ha-925161-m03"
	E0719 04:25:34.803317       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-j6526\": pod kube-proxy-j6526 is already assigned to node \"ha-925161-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-j6526" node="ha-925161-m03"
	E0719 04:25:34.803378       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 20b69c28-de0f-4ed7-846c-848d9e938c46(kube-system/kube-proxy-j6526) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-j6526"
	E0719 04:25:34.805910       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-j6526\": pod kube-proxy-j6526 is already assigned to node \"ha-925161-m03\"" pod="kube-system/kube-proxy-j6526"
	I0719 04:25:34.806120       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-j6526" node="ha-925161-m03"
	E0719 04:26:03.007466       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-h8rpn\": pod busybox-fc5497c4f-h8rpn is already assigned to node \"ha-925161-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-h8rpn" node="ha-925161-m02"
	E0719 04:26:03.007620       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-h8rpn\": pod busybox-fc5497c4f-h8rpn is already assigned to node \"ha-925161-m03\"" pod="default/busybox-fc5497c4f-h8rpn"
	E0719 04:27:29.902023       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-f4fgd\": pod kube-proxy-f4fgd is already assigned to node \"ha-925161-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-f4fgd" node="ha-925161-m04"
	E0719 04:27:29.902117       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-f4fgd\": pod kube-proxy-f4fgd is already assigned to node \"ha-925161-m04\"" pod="kube-system/kube-proxy-f4fgd"
	E0719 04:27:29.950616       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-dnwxp\": pod kindnet-dnwxp is already assigned to node \"ha-925161-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-dnwxp" node="ha-925161-m04"
	E0719 04:27:29.952590       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod bb80bffc-8a33-4e45-9d7e-560526e289a7(kube-system/kindnet-dnwxp) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-dnwxp"
	E0719 04:27:29.952714       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-dnwxp\": pod kindnet-dnwxp is already assigned to node \"ha-925161-m04\"" pod="kube-system/kindnet-dnwxp"
	I0719 04:27:29.952830       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-dnwxp" node="ha-925161-m04"
	E0719 04:27:30.048921       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-2cxws\": pod kindnet-2cxws is already assigned to node \"ha-925161-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-2cxws" node="ha-925161-m04"
	E0719 04:27:30.051009       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod bf5c4d4d-bf9a-42c4-8e17-ded79b29fbf0(kube-system/kindnet-2cxws) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-2cxws"
	E0719 04:27:30.051082       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-2cxws\": pod kindnet-2cxws is already assigned to node \"ha-925161-m04\"" pod="kube-system/kindnet-2cxws"
	I0719 04:27:30.051128       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-2cxws" node="ha-925161-m04"
	
	
	==> kubelet <==
	Jul 19 04:26:09 ha-925161 kubelet[1377]: E0719 04:26:09.118302    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 04:26:09 ha-925161 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 04:26:09 ha-925161 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 04:26:09 ha-925161 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 04:26:09 ha-925161 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 04:27:09 ha-925161 kubelet[1377]: E0719 04:27:09.121109    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 04:27:09 ha-925161 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 04:27:09 ha-925161 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 04:27:09 ha-925161 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 04:27:09 ha-925161 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 04:28:09 ha-925161 kubelet[1377]: E0719 04:28:09.118663    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 04:28:09 ha-925161 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 04:28:09 ha-925161 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 04:28:09 ha-925161 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 04:28:09 ha-925161 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 04:29:09 ha-925161 kubelet[1377]: E0719 04:29:09.118773    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 04:29:09 ha-925161 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 04:29:09 ha-925161 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 04:29:09 ha-925161 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 04:29:09 ha-925161 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 04:30:09 ha-925161 kubelet[1377]: E0719 04:30:09.118645    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 04:30:09 ha-925161 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 04:30:09 ha-925161 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 04:30:09 ha-925161 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 04:30:09 ha-925161 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-925161 -n ha-925161
helpers_test.go:261: (dbg) Run:  kubectl --context ha-925161 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.73s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (58.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-925161 status -v=7 --alsologtostderr: exit status 3 (3.194800862s)

                                                
                                                
-- stdout --
	ha-925161
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-925161-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-925161-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-925161-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 04:30:35.493514  150337 out.go:291] Setting OutFile to fd 1 ...
	I0719 04:30:35.493649  150337 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:30:35.493660  150337 out.go:304] Setting ErrFile to fd 2...
	I0719 04:30:35.493666  150337 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:30:35.493844  150337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-122995/.minikube/bin
	I0719 04:30:35.494015  150337 out.go:298] Setting JSON to false
	I0719 04:30:35.494057  150337 mustload.go:65] Loading cluster: ha-925161
	I0719 04:30:35.494101  150337 notify.go:220] Checking for updates...
	I0719 04:30:35.494454  150337 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:30:35.494474  150337 status.go:255] checking status of ha-925161 ...
	I0719 04:30:35.494872  150337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:35.494932  150337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:35.514653  150337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40571
	I0719 04:30:35.515122  150337 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:35.515704  150337 main.go:141] libmachine: Using API Version  1
	I0719 04:30:35.515725  150337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:35.516168  150337 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:35.516350  150337 main.go:141] libmachine: (ha-925161) Calling .GetState
	I0719 04:30:35.517998  150337 status.go:330] ha-925161 host status = "Running" (err=<nil>)
	I0719 04:30:35.518015  150337 host.go:66] Checking if "ha-925161" exists ...
	I0719 04:30:35.518309  150337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:35.518373  150337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:35.532952  150337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38077
	I0719 04:30:35.533371  150337 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:35.533851  150337 main.go:141] libmachine: Using API Version  1
	I0719 04:30:35.533873  150337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:35.534140  150337 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:35.534300  150337 main.go:141] libmachine: (ha-925161) Calling .GetIP
	I0719 04:30:35.536779  150337 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:30:35.537256  150337 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:30:35.537282  150337 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:30:35.537477  150337 host.go:66] Checking if "ha-925161" exists ...
	I0719 04:30:35.537813  150337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:35.537851  150337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:35.552234  150337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45745
	I0719 04:30:35.552608  150337 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:35.553451  150337 main.go:141] libmachine: Using API Version  1
	I0719 04:30:35.553513  150337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:35.554419  150337 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:35.555099  150337 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:30:35.555354  150337 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:30:35.555391  150337 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:30:35.558868  150337 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:30:35.559332  150337 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:30:35.559360  150337 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:30:35.559522  150337 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:30:35.559690  150337 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:30:35.559833  150337 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:30:35.560015  150337 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:30:35.643633  150337 ssh_runner.go:195] Run: systemctl --version
	I0719 04:30:35.649007  150337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:30:35.663859  150337 kubeconfig.go:125] found "ha-925161" server: "https://192.168.39.254:8443"
	I0719 04:30:35.663889  150337 api_server.go:166] Checking apiserver status ...
	I0719 04:30:35.663928  150337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 04:30:35.677886  150337 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1190/cgroup
	W0719 04:30:35.687053  150337 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1190/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 04:30:35.687117  150337 ssh_runner.go:195] Run: ls
	I0719 04:30:35.692073  150337 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 04:30:35.696030  150337 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 04:30:35.696058  150337 status.go:422] ha-925161 apiserver status = Running (err=<nil>)
	I0719 04:30:35.696067  150337 status.go:257] ha-925161 status: &{Name:ha-925161 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:30:35.696084  150337 status.go:255] checking status of ha-925161-m02 ...
	I0719 04:30:35.696350  150337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:35.696390  150337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:35.712302  150337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40829
	I0719 04:30:35.712761  150337 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:35.713366  150337 main.go:141] libmachine: Using API Version  1
	I0719 04:30:35.713397  150337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:35.714119  150337 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:35.715471  150337 main.go:141] libmachine: (ha-925161-m02) Calling .GetState
	I0719 04:30:35.717398  150337 status.go:330] ha-925161-m02 host status = "Running" (err=<nil>)
	I0719 04:30:35.717421  150337 host.go:66] Checking if "ha-925161-m02" exists ...
	I0719 04:30:35.717813  150337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:35.717862  150337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:35.732812  150337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I0719 04:30:35.733185  150337 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:35.733666  150337 main.go:141] libmachine: Using API Version  1
	I0719 04:30:35.733693  150337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:35.734023  150337 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:35.734217  150337 main.go:141] libmachine: (ha-925161-m02) Calling .GetIP
	I0719 04:30:35.736887  150337 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:30:35.737395  150337 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:30:35.737423  150337 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:30:35.737565  150337 host.go:66] Checking if "ha-925161-m02" exists ...
	I0719 04:30:35.737967  150337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:35.738010  150337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:35.752909  150337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42571
	I0719 04:30:35.753389  150337 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:35.753809  150337 main.go:141] libmachine: Using API Version  1
	I0719 04:30:35.753830  150337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:35.754257  150337 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:35.754427  150337 main.go:141] libmachine: (ha-925161-m02) Calling .DriverName
	I0719 04:30:35.754701  150337 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:30:35.754719  150337 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHHostname
	I0719 04:30:35.757455  150337 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:30:35.757903  150337 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:30:35.757927  150337 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:30:35.758070  150337 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHPort
	I0719 04:30:35.758236  150337 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:30:35.758378  150337 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHUsername
	I0719 04:30:35.758509  150337 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02/id_rsa Username:docker}
	W0719 04:30:38.305413  150337 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.102:22: connect: no route to host
	W0719 04:30:38.305521  150337 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	E0719 04:30:38.305543  150337 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	I0719 04:30:38.305558  150337 status.go:257] ha-925161-m02 status: &{Name:ha-925161-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0719 04:30:38.305584  150337 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	I0719 04:30:38.305611  150337 status.go:255] checking status of ha-925161-m03 ...
	I0719 04:30:38.305922  150337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:38.305958  150337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:38.320803  150337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35083
	I0719 04:30:38.321398  150337 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:38.321869  150337 main.go:141] libmachine: Using API Version  1
	I0719 04:30:38.321891  150337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:38.322268  150337 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:38.322458  150337 main.go:141] libmachine: (ha-925161-m03) Calling .GetState
	I0719 04:30:38.324029  150337 status.go:330] ha-925161-m03 host status = "Running" (err=<nil>)
	I0719 04:30:38.324053  150337 host.go:66] Checking if "ha-925161-m03" exists ...
	I0719 04:30:38.324352  150337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:38.324378  150337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:38.339488  150337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41295
	I0719 04:30:38.339836  150337 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:38.340266  150337 main.go:141] libmachine: Using API Version  1
	I0719 04:30:38.340287  150337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:38.340624  150337 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:38.340799  150337 main.go:141] libmachine: (ha-925161-m03) Calling .GetIP
	I0719 04:30:38.343403  150337 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:30:38.343817  150337 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:30:38.343842  150337 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:30:38.343959  150337 host.go:66] Checking if "ha-925161-m03" exists ...
	I0719 04:30:38.344275  150337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:38.344299  150337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:38.358546  150337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42855
	I0719 04:30:38.358896  150337 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:38.359325  150337 main.go:141] libmachine: Using API Version  1
	I0719 04:30:38.359348  150337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:38.359639  150337 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:38.359815  150337 main.go:141] libmachine: (ha-925161-m03) Calling .DriverName
	I0719 04:30:38.359998  150337 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:30:38.360018  150337 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHHostname
	I0719 04:30:38.362471  150337 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:30:38.362839  150337 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:30:38.362869  150337 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:30:38.362998  150337 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHPort
	I0719 04:30:38.363171  150337 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:30:38.363298  150337 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHUsername
	I0719 04:30:38.363468  150337 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03/id_rsa Username:docker}
	I0719 04:30:38.440218  150337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:30:38.454291  150337 kubeconfig.go:125] found "ha-925161" server: "https://192.168.39.254:8443"
	I0719 04:30:38.454333  150337 api_server.go:166] Checking apiserver status ...
	I0719 04:30:38.454365  150337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 04:30:38.468391  150337 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1532/cgroup
	W0719 04:30:38.481223  150337 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1532/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 04:30:38.481282  150337 ssh_runner.go:195] Run: ls
	I0719 04:30:38.485377  150337 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 04:30:38.489568  150337 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 04:30:38.489589  150337 status.go:422] ha-925161-m03 apiserver status = Running (err=<nil>)
	I0719 04:30:38.489597  150337 status.go:257] ha-925161-m03 status: &{Name:ha-925161-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:30:38.489613  150337 status.go:255] checking status of ha-925161-m04 ...
	I0719 04:30:38.489935  150337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:38.489958  150337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:38.505279  150337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34633
	I0719 04:30:38.505706  150337 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:38.506197  150337 main.go:141] libmachine: Using API Version  1
	I0719 04:30:38.506220  150337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:38.506503  150337 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:38.506685  150337 main.go:141] libmachine: (ha-925161-m04) Calling .GetState
	I0719 04:30:38.508193  150337 status.go:330] ha-925161-m04 host status = "Running" (err=<nil>)
	I0719 04:30:38.508209  150337 host.go:66] Checking if "ha-925161-m04" exists ...
	I0719 04:30:38.508555  150337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:38.508578  150337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:38.523538  150337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38053
	I0719 04:30:38.523898  150337 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:38.524330  150337 main.go:141] libmachine: Using API Version  1
	I0719 04:30:38.524355  150337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:38.524635  150337 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:38.524822  150337 main.go:141] libmachine: (ha-925161-m04) Calling .GetIP
	I0719 04:30:38.527514  150337 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:30:38.527933  150337 main.go:141] libmachine: (ha-925161-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:a2:b6", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:27:16 +0000 UTC Type:0 Mac:52:54:00:cb:a2:b6 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:ha-925161-m04 Clientid:01:52:54:00:cb:a2:b6}
	I0719 04:30:38.527966  150337 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined IP address 192.168.39.75 and MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:30:38.528077  150337 host.go:66] Checking if "ha-925161-m04" exists ...
	I0719 04:30:38.528357  150337 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:38.528391  150337 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:38.542427  150337 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38275
	I0719 04:30:38.542803  150337 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:38.543222  150337 main.go:141] libmachine: Using API Version  1
	I0719 04:30:38.543246  150337 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:38.543507  150337 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:38.543715  150337 main.go:141] libmachine: (ha-925161-m04) Calling .DriverName
	I0719 04:30:38.543881  150337 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:30:38.543898  150337 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHHostname
	I0719 04:30:38.546503  150337 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:30:38.546901  150337 main.go:141] libmachine: (ha-925161-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:a2:b6", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:27:16 +0000 UTC Type:0 Mac:52:54:00:cb:a2:b6 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:ha-925161-m04 Clientid:01:52:54:00:cb:a2:b6}
	I0719 04:30:38.546928  150337 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined IP address 192.168.39.75 and MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:30:38.547050  150337 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHPort
	I0719 04:30:38.547205  150337 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHKeyPath
	I0719 04:30:38.547326  150337 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHUsername
	I0719 04:30:38.547470  150337 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m04/id_rsa Username:docker}
	I0719 04:30:38.627766  150337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:30:38.641113  150337 status.go:257] ha-925161-m04 status: &{Name:ha-925161-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
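The status probe in the stderr dump above locates the kube-apiserver PID with pgrep, gets an empty result from the freezer-cgroup lookup (likely because the guest uses cgroup v2, where no "freezer" line appears in /proc/<pid>/cgroup), and then falls back to the HTTPS health check against https://192.168.39.254:8443/healthz, which returns 200 "ok". The following is a minimal, hypothetical Go sketch of that fallback check only, not minikube's actual status.go code; the insecure TLS config and 5-second timeout are assumptions made so the snippet runs standalone against the VIP endpoint seen in the log.

// healthz_probe.go: standalone sketch of the apiserver healthz fallback (illustrative only)
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Skip certificate verification because the cluster uses a self-signed CA (assumption).
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	// In the run above this reports 200; anything else would mark the apiserver as not Running.
	fmt.Println("healthz returned:", resp.Status)
}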
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-925161 status -v=7 --alsologtostderr: exit status 3 (5.5486382s)

                                                
                                                
-- stdout --
	ha-925161
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-925161-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-925161-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-925161-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 04:30:39.288603  150436 out.go:291] Setting OutFile to fd 1 ...
	I0719 04:30:39.288834  150436 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:30:39.288842  150436 out.go:304] Setting ErrFile to fd 2...
	I0719 04:30:39.288846  150436 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:30:39.289040  150436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-122995/.minikube/bin
	I0719 04:30:39.289228  150436 out.go:298] Setting JSON to false
	I0719 04:30:39.289255  150436 mustload.go:65] Loading cluster: ha-925161
	I0719 04:30:39.289313  150436 notify.go:220] Checking for updates...
	I0719 04:30:39.289624  150436 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:30:39.289638  150436 status.go:255] checking status of ha-925161 ...
	I0719 04:30:39.290017  150436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:39.290083  150436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:39.305783  150436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33371
	I0719 04:30:39.306211  150436 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:39.306772  150436 main.go:141] libmachine: Using API Version  1
	I0719 04:30:39.306796  150436 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:39.307185  150436 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:39.307389  150436 main.go:141] libmachine: (ha-925161) Calling .GetState
	I0719 04:30:39.308993  150436 status.go:330] ha-925161 host status = "Running" (err=<nil>)
	I0719 04:30:39.309009  150436 host.go:66] Checking if "ha-925161" exists ...
	I0719 04:30:39.309349  150436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:39.309394  150436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:39.324310  150436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45535
	I0719 04:30:39.324726  150436 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:39.325283  150436 main.go:141] libmachine: Using API Version  1
	I0719 04:30:39.325301  150436 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:39.325635  150436 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:39.325803  150436 main.go:141] libmachine: (ha-925161) Calling .GetIP
	I0719 04:30:39.328623  150436 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:30:39.329021  150436 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:30:39.329051  150436 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:30:39.329247  150436 host.go:66] Checking if "ha-925161" exists ...
	I0719 04:30:39.329645  150436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:39.329715  150436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:39.344828  150436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36777
	I0719 04:30:39.345236  150436 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:39.345659  150436 main.go:141] libmachine: Using API Version  1
	I0719 04:30:39.345682  150436 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:39.346001  150436 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:39.346186  150436 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:30:39.346394  150436 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:30:39.346417  150436 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:30:39.349153  150436 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:30:39.349587  150436 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:30:39.349612  150436 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:30:39.349763  150436 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:30:39.349944  150436 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:30:39.350126  150436 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:30:39.350281  150436 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:30:39.440726  150436 ssh_runner.go:195] Run: systemctl --version
	I0719 04:30:39.446424  150436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:30:39.464282  150436 kubeconfig.go:125] found "ha-925161" server: "https://192.168.39.254:8443"
	I0719 04:30:39.464311  150436 api_server.go:166] Checking apiserver status ...
	I0719 04:30:39.464343  150436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 04:30:39.479608  150436 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1190/cgroup
	W0719 04:30:39.490891  150436 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1190/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 04:30:39.490949  150436 ssh_runner.go:195] Run: ls
	I0719 04:30:39.495261  150436 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 04:30:39.501095  150436 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 04:30:39.501120  150436 status.go:422] ha-925161 apiserver status = Running (err=<nil>)
	I0719 04:30:39.501135  150436 status.go:257] ha-925161 status: &{Name:ha-925161 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:30:39.501163  150436 status.go:255] checking status of ha-925161-m02 ...
	I0719 04:30:39.501530  150436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:39.501570  150436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:39.516457  150436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44685
	I0719 04:30:39.516844  150436 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:39.517308  150436 main.go:141] libmachine: Using API Version  1
	I0719 04:30:39.517331  150436 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:39.517644  150436 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:39.517846  150436 main.go:141] libmachine: (ha-925161-m02) Calling .GetState
	I0719 04:30:39.519498  150436 status.go:330] ha-925161-m02 host status = "Running" (err=<nil>)
	I0719 04:30:39.519516  150436 host.go:66] Checking if "ha-925161-m02" exists ...
	I0719 04:30:39.519781  150436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:39.519817  150436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:39.536346  150436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42929
	I0719 04:30:39.536766  150436 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:39.537306  150436 main.go:141] libmachine: Using API Version  1
	I0719 04:30:39.537342  150436 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:39.537679  150436 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:39.537908  150436 main.go:141] libmachine: (ha-925161-m02) Calling .GetIP
	I0719 04:30:39.540977  150436 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:30:39.541667  150436 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:30:39.541696  150436 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:30:39.541730  150436 host.go:66] Checking if "ha-925161-m02" exists ...
	I0719 04:30:39.542330  150436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:39.542417  150436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:39.563627  150436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34879
	I0719 04:30:39.564046  150436 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:39.564577  150436 main.go:141] libmachine: Using API Version  1
	I0719 04:30:39.564603  150436 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:39.564898  150436 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:39.565135  150436 main.go:141] libmachine: (ha-925161-m02) Calling .DriverName
	I0719 04:30:39.565314  150436 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:30:39.565340  150436 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHHostname
	I0719 04:30:39.568408  150436 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:30:39.568904  150436 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:30:39.568929  150436 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:30:39.569077  150436 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHPort
	I0719 04:30:39.569288  150436 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:30:39.569483  150436 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHUsername
	I0719 04:30:39.569701  150436 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02/id_rsa Username:docker}
	W0719 04:30:41.377399  150436 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.102:22: connect: no route to host
	I0719 04:30:41.377459  150436 retry.go:31] will retry after 179.173406ms: dial tcp 192.168.39.102:22: connect: no route to host
	W0719 04:30:44.449333  150436 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.102:22: connect: no route to host
	W0719 04:30:44.449482  150436 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	E0719 04:30:44.449512  150436 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	I0719 04:30:44.449527  150436 status.go:257] ha-925161-m02 status: &{Name:ha-925161-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0719 04:30:44.449559  150436 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	I0719 04:30:44.449569  150436 status.go:255] checking status of ha-925161-m03 ...
	I0719 04:30:44.449894  150436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:44.449950  150436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:44.465421  150436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37945
	I0719 04:30:44.465836  150436 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:44.466381  150436 main.go:141] libmachine: Using API Version  1
	I0719 04:30:44.466402  150436 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:44.466760  150436 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:44.466943  150436 main.go:141] libmachine: (ha-925161-m03) Calling .GetState
	I0719 04:30:44.468767  150436 status.go:330] ha-925161-m03 host status = "Running" (err=<nil>)
	I0719 04:30:44.468785  150436 host.go:66] Checking if "ha-925161-m03" exists ...
	I0719 04:30:44.469197  150436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:44.469239  150436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:44.486649  150436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37937
	I0719 04:30:44.487002  150436 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:44.487434  150436 main.go:141] libmachine: Using API Version  1
	I0719 04:30:44.487459  150436 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:44.487746  150436 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:44.487925  150436 main.go:141] libmachine: (ha-925161-m03) Calling .GetIP
	I0719 04:30:44.490924  150436 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:30:44.491357  150436 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:30:44.491392  150436 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:30:44.491504  150436 host.go:66] Checking if "ha-925161-m03" exists ...
	I0719 04:30:44.491827  150436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:44.491869  150436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:44.507554  150436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45211
	I0719 04:30:44.507987  150436 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:44.508442  150436 main.go:141] libmachine: Using API Version  1
	I0719 04:30:44.508464  150436 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:44.508802  150436 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:44.509022  150436 main.go:141] libmachine: (ha-925161-m03) Calling .DriverName
	I0719 04:30:44.509252  150436 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:30:44.509293  150436 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHHostname
	I0719 04:30:44.512162  150436 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:30:44.512588  150436 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:30:44.512617  150436 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:30:44.512768  150436 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHPort
	I0719 04:30:44.512939  150436 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:30:44.513087  150436 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHUsername
	I0719 04:30:44.513239  150436 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03/id_rsa Username:docker}
	I0719 04:30:44.588097  150436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:30:44.603739  150436 kubeconfig.go:125] found "ha-925161" server: "https://192.168.39.254:8443"
	I0719 04:30:44.603772  150436 api_server.go:166] Checking apiserver status ...
	I0719 04:30:44.603808  150436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 04:30:44.617979  150436 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1532/cgroup
	W0719 04:30:44.627414  150436 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1532/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 04:30:44.627467  150436 ssh_runner.go:195] Run: ls
	I0719 04:30:44.632465  150436 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 04:30:44.636579  150436 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 04:30:44.636605  150436 status.go:422] ha-925161-m03 apiserver status = Running (err=<nil>)
	I0719 04:30:44.636616  150436 status.go:257] ha-925161-m03 status: &{Name:ha-925161-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:30:44.636635  150436 status.go:255] checking status of ha-925161-m04 ...
	I0719 04:30:44.637058  150436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:44.637132  150436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:44.652622  150436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39647
	I0719 04:30:44.653091  150436 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:44.653578  150436 main.go:141] libmachine: Using API Version  1
	I0719 04:30:44.653596  150436 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:44.653937  150436 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:44.654247  150436 main.go:141] libmachine: (ha-925161-m04) Calling .GetState
	I0719 04:30:44.655802  150436 status.go:330] ha-925161-m04 host status = "Running" (err=<nil>)
	I0719 04:30:44.655818  150436 host.go:66] Checking if "ha-925161-m04" exists ...
	I0719 04:30:44.656072  150436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:44.656104  150436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:44.671320  150436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42877
	I0719 04:30:44.671750  150436 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:44.672207  150436 main.go:141] libmachine: Using API Version  1
	I0719 04:30:44.672226  150436 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:44.672520  150436 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:44.672688  150436 main.go:141] libmachine: (ha-925161-m04) Calling .GetIP
	I0719 04:30:44.675410  150436 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:30:44.675795  150436 main.go:141] libmachine: (ha-925161-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:a2:b6", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:27:16 +0000 UTC Type:0 Mac:52:54:00:cb:a2:b6 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:ha-925161-m04 Clientid:01:52:54:00:cb:a2:b6}
	I0719 04:30:44.675815  150436 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined IP address 192.168.39.75 and MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:30:44.675948  150436 host.go:66] Checking if "ha-925161-m04" exists ...
	I0719 04:30:44.676328  150436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:44.676371  150436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:44.692039  150436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45471
	I0719 04:30:44.692414  150436 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:44.692842  150436 main.go:141] libmachine: Using API Version  1
	I0719 04:30:44.692863  150436 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:44.693203  150436 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:44.693391  150436 main.go:141] libmachine: (ha-925161-m04) Calling .DriverName
	I0719 04:30:44.693586  150436 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:30:44.693606  150436 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHHostname
	I0719 04:30:44.695982  150436 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:30:44.696464  150436 main.go:141] libmachine: (ha-925161-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:a2:b6", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:27:16 +0000 UTC Type:0 Mac:52:54:00:cb:a2:b6 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:ha-925161-m04 Clientid:01:52:54:00:cb:a2:b6}
	I0719 04:30:44.696492  150436 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined IP address 192.168.39.75 and MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:30:44.696632  150436 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHPort
	I0719 04:30:44.696789  150436 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHKeyPath
	I0719 04:30:44.696962  150436 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHUsername
	I0719 04:30:44.697120  150436 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m04/id_rsa Username:docker}
	I0719 04:30:44.779777  150436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:30:44.794494  150436 status.go:257] ha-925161-m04 status: &{Name:ha-925161-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
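Every retry against ha-925161-m02 in the run above fails at the SSH dial stage ("dial tcp 192.168.39.102:22: connect: no route to host"), so the node is reported as Host:Error / Kubelet:Nonexistent without any command ever running on it. Below is a small, hypothetical Go sketch of the same TCP-level reachability probe, using plain net.DialTimeout rather than minikube's sshutil client; the 5-second timeout is an assumption.

// ssh_dial_probe.go: standalone sketch of the failing TCP dial to the m02 node (illustrative only)
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.168.39.102:22", 5*time.Second)
	if err != nil {
		// On this run the dial fails with "connect: no route to host",
		// which is why ha-925161-m02 shows host: Error in the status output.
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("port 22 reachable")
}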
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-925161 status -v=7 --alsologtostderr: exit status 3 (4.755026557s)

                                                
                                                
-- stdout --
	ha-925161
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-925161-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-925161-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-925161-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 04:30:46.576401  150537 out.go:291] Setting OutFile to fd 1 ...
	I0719 04:30:46.576513  150537 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:30:46.576523  150537 out.go:304] Setting ErrFile to fd 2...
	I0719 04:30:46.576527  150537 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:30:46.576674  150537 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-122995/.minikube/bin
	I0719 04:30:46.576817  150537 out.go:298] Setting JSON to false
	I0719 04:30:46.576844  150537 mustload.go:65] Loading cluster: ha-925161
	I0719 04:30:46.576879  150537 notify.go:220] Checking for updates...
	I0719 04:30:46.577239  150537 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:30:46.577259  150537 status.go:255] checking status of ha-925161 ...
	I0719 04:30:46.577689  150537 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:46.577764  150537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:46.597316  150537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35985
	I0719 04:30:46.597872  150537 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:46.598597  150537 main.go:141] libmachine: Using API Version  1
	I0719 04:30:46.598636  150537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:46.598959  150537 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:46.599118  150537 main.go:141] libmachine: (ha-925161) Calling .GetState
	I0719 04:30:46.600912  150537 status.go:330] ha-925161 host status = "Running" (err=<nil>)
	I0719 04:30:46.600927  150537 host.go:66] Checking if "ha-925161" exists ...
	I0719 04:30:46.601245  150537 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:46.601288  150537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:46.616364  150537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39555
	I0719 04:30:46.616808  150537 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:46.617318  150537 main.go:141] libmachine: Using API Version  1
	I0719 04:30:46.617351  150537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:46.617671  150537 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:46.617873  150537 main.go:141] libmachine: (ha-925161) Calling .GetIP
	I0719 04:30:46.620688  150537 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:30:46.621231  150537 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:30:46.621256  150537 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:30:46.621413  150537 host.go:66] Checking if "ha-925161" exists ...
	I0719 04:30:46.621838  150537 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:46.621931  150537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:46.638381  150537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37551
	I0719 04:30:46.638782  150537 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:46.639252  150537 main.go:141] libmachine: Using API Version  1
	I0719 04:30:46.639283  150537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:46.639597  150537 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:46.639767  150537 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:30:46.639976  150537 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:30:46.640005  150537 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:30:46.642535  150537 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:30:46.642921  150537 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:30:46.642950  150537 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:30:46.643073  150537 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:30:46.643205  150537 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:30:46.643302  150537 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:30:46.643407  150537 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:30:46.725493  150537 ssh_runner.go:195] Run: systemctl --version
	I0719 04:30:46.732305  150537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:30:46.746800  150537 kubeconfig.go:125] found "ha-925161" server: "https://192.168.39.254:8443"
	I0719 04:30:46.746833  150537 api_server.go:166] Checking apiserver status ...
	I0719 04:30:46.746877  150537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 04:30:46.762191  150537 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1190/cgroup
	W0719 04:30:46.772294  150537 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1190/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 04:30:46.772341  150537 ssh_runner.go:195] Run: ls
	I0719 04:30:46.776147  150537 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 04:30:46.782774  150537 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 04:30:46.782798  150537 status.go:422] ha-925161 apiserver status = Running (err=<nil>)
	I0719 04:30:46.782807  150537 status.go:257] ha-925161 status: &{Name:ha-925161 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:30:46.782824  150537 status.go:255] checking status of ha-925161-m02 ...
	I0719 04:30:46.783127  150537 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:46.783165  150537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:46.799822  150537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33127
	I0719 04:30:46.800281  150537 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:46.800831  150537 main.go:141] libmachine: Using API Version  1
	I0719 04:30:46.800864  150537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:46.801283  150537 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:46.801517  150537 main.go:141] libmachine: (ha-925161-m02) Calling .GetState
	I0719 04:30:46.803257  150537 status.go:330] ha-925161-m02 host status = "Running" (err=<nil>)
	I0719 04:30:46.803285  150537 host.go:66] Checking if "ha-925161-m02" exists ...
	I0719 04:30:46.803580  150537 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:46.803628  150537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:46.818874  150537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43331
	I0719 04:30:46.819343  150537 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:46.819811  150537 main.go:141] libmachine: Using API Version  1
	I0719 04:30:46.819834  150537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:46.820181  150537 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:46.820350  150537 main.go:141] libmachine: (ha-925161-m02) Calling .GetIP
	I0719 04:30:46.823455  150537 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:30:46.823917  150537 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:30:46.823948  150537 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:30:46.824090  150537 host.go:66] Checking if "ha-925161-m02" exists ...
	I0719 04:30:46.824474  150537 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:46.824600  150537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:46.840782  150537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44225
	I0719 04:30:46.841209  150537 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:46.841631  150537 main.go:141] libmachine: Using API Version  1
	I0719 04:30:46.841651  150537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:46.842125  150537 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:46.842319  150537 main.go:141] libmachine: (ha-925161-m02) Calling .DriverName
	I0719 04:30:46.842538  150537 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:30:46.842560  150537 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHHostname
	I0719 04:30:46.845508  150537 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:30:46.845912  150537 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:30:46.845973  150537 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:30:46.846084  150537 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHPort
	I0719 04:30:46.846268  150537 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:30:46.846428  150537 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHUsername
	I0719 04:30:46.846580  150537 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02/id_rsa Username:docker}
	W0719 04:30:47.521315  150537 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.102:22: connect: no route to host
	I0719 04:30:47.521368  150537 retry.go:31] will retry after 360.088932ms: dial tcp 192.168.39.102:22: connect: no route to host
	W0719 04:30:50.945342  150537 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.102:22: connect: no route to host
	W0719 04:30:50.945438  150537 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	E0719 04:30:50.945467  150537 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	I0719 04:30:50.945474  150537 status.go:257] ha-925161-m02 status: &{Name:ha-925161-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0719 04:30:50.945496  150537 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	I0719 04:30:50.945506  150537 status.go:255] checking status of ha-925161-m03 ...
	I0719 04:30:50.945808  150537 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:50.945839  150537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:50.961651  150537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46843
	I0719 04:30:50.962190  150537 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:50.962684  150537 main.go:141] libmachine: Using API Version  1
	I0719 04:30:50.962717  150537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:50.963016  150537 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:50.963209  150537 main.go:141] libmachine: (ha-925161-m03) Calling .GetState
	I0719 04:30:50.964776  150537 status.go:330] ha-925161-m03 host status = "Running" (err=<nil>)
	I0719 04:30:50.964793  150537 host.go:66] Checking if "ha-925161-m03" exists ...
	I0719 04:30:50.965120  150537 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:50.965151  150537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:50.981382  150537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39839
	I0719 04:30:50.981822  150537 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:50.982338  150537 main.go:141] libmachine: Using API Version  1
	I0719 04:30:50.982362  150537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:50.982740  150537 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:50.982947  150537 main.go:141] libmachine: (ha-925161-m03) Calling .GetIP
	I0719 04:30:50.985505  150537 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:30:50.985922  150537 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:30:50.985949  150537 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:30:50.986034  150537 host.go:66] Checking if "ha-925161-m03" exists ...
	I0719 04:30:50.986334  150537 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:50.986372  150537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:51.001346  150537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45603
	I0719 04:30:51.001844  150537 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:51.002372  150537 main.go:141] libmachine: Using API Version  1
	I0719 04:30:51.002397  150537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:51.002715  150537 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:51.002900  150537 main.go:141] libmachine: (ha-925161-m03) Calling .DriverName
	I0719 04:30:51.003111  150537 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:30:51.003133  150537 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHHostname
	I0719 04:30:51.006000  150537 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:30:51.006551  150537 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:30:51.006585  150537 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:30:51.006715  150537 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHPort
	I0719 04:30:51.006878  150537 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:30:51.007038  150537 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHUsername
	I0719 04:30:51.007211  150537 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03/id_rsa Username:docker}
	I0719 04:30:51.088436  150537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:30:51.103756  150537 kubeconfig.go:125] found "ha-925161" server: "https://192.168.39.254:8443"
	I0719 04:30:51.103784  150537 api_server.go:166] Checking apiserver status ...
	I0719 04:30:51.103824  150537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 04:30:51.116387  150537 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1532/cgroup
	W0719 04:30:51.124880  150537 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1532/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 04:30:51.124921  150537 ssh_runner.go:195] Run: ls
	I0719 04:30:51.128724  150537 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 04:30:51.132825  150537 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 04:30:51.132847  150537 status.go:422] ha-925161-m03 apiserver status = Running (err=<nil>)
	I0719 04:30:51.132855  150537 status.go:257] ha-925161-m03 status: &{Name:ha-925161-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:30:51.132869  150537 status.go:255] checking status of ha-925161-m04 ...
	I0719 04:30:51.133173  150537 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:51.133217  150537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:51.148559  150537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37879
	I0719 04:30:51.148999  150537 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:51.149518  150537 main.go:141] libmachine: Using API Version  1
	I0719 04:30:51.149537  150537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:51.149840  150537 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:51.150047  150537 main.go:141] libmachine: (ha-925161-m04) Calling .GetState
	I0719 04:30:51.151532  150537 status.go:330] ha-925161-m04 host status = "Running" (err=<nil>)
	I0719 04:30:51.151549  150537 host.go:66] Checking if "ha-925161-m04" exists ...
	I0719 04:30:51.151905  150537 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:51.151935  150537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:51.166888  150537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32853
	I0719 04:30:51.167324  150537 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:51.167789  150537 main.go:141] libmachine: Using API Version  1
	I0719 04:30:51.167811  150537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:51.168079  150537 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:51.168293  150537 main.go:141] libmachine: (ha-925161-m04) Calling .GetIP
	I0719 04:30:51.171067  150537 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:30:51.171512  150537 main.go:141] libmachine: (ha-925161-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:a2:b6", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:27:16 +0000 UTC Type:0 Mac:52:54:00:cb:a2:b6 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:ha-925161-m04 Clientid:01:52:54:00:cb:a2:b6}
	I0719 04:30:51.171546  150537 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined IP address 192.168.39.75 and MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:30:51.171679  150537 host.go:66] Checking if "ha-925161-m04" exists ...
	I0719 04:30:51.172076  150537 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:51.172116  150537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:51.187719  150537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42275
	I0719 04:30:51.188137  150537 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:51.188602  150537 main.go:141] libmachine: Using API Version  1
	I0719 04:30:51.188623  150537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:51.188942  150537 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:51.189140  150537 main.go:141] libmachine: (ha-925161-m04) Calling .DriverName
	I0719 04:30:51.189308  150537 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:30:51.189335  150537 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHHostname
	I0719 04:30:51.191793  150537 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:30:51.192255  150537 main.go:141] libmachine: (ha-925161-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:a2:b6", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:27:16 +0000 UTC Type:0 Mac:52:54:00:cb:a2:b6 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:ha-925161-m04 Clientid:01:52:54:00:cb:a2:b6}
	I0719 04:30:51.192278  150537 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined IP address 192.168.39.75 and MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:30:51.192443  150537 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHPort
	I0719 04:30:51.192614  150537 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHKeyPath
	I0719 04:30:51.192776  150537 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHUsername
	I0719 04:30:51.192927  150537 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m04/id_rsa Username:docker}
	I0719 04:30:51.275574  150537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:30:51.288471  150537 status.go:257] ha-925161-m04 status: &{Name:ha-925161-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
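Reading the trace above, each per-node probe follows the same sequence: libmachine reports the VM state, the node is reached over SSH to sample /var usage (sh -c "df -h /var | awk 'NR==2{print $5}'") and to check the kubelet unit, and for control-plane nodes the probe locates kube-apiserver and hits /healthz on the load-balanced endpoint. The sketch below is only an illustration of that final healthz step, assuming the HA virtual IP seen in this report and an insecure TLS client (the report does not show how the real client is configured); it is not minikube's actual status code.

// healthz_probe.go - illustrative sketch only, mirroring the
// "Checking apiserver healthz at https://.../healthz ... returned 200: ok"
// lines in the trace above. Endpoint and InsecureSkipVerify are assumptions.
package main

import (
    "crypto/tls"
    "fmt"
    "io"
    "net/http"
    "time"
)

func main() {
    client := &http.Client{
        Timeout: 5 * time.Second,
        Transport: &http.Transport{
            // Assumption: the cluster CA is not in the local trust store.
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        },
    }

    resp, err := client.Get("https://192.168.39.254:8443/healthz")
    if err != nil {
        fmt.Println("apiserver status = Error:", err)
        return
    }
    defer resp.Body.Close()

    body, _ := io.ReadAll(resp.Body)
    if resp.StatusCode == http.StatusOK && string(body) == "ok" {
        fmt.Println("apiserver status = Running")
    } else {
        fmt.Printf("apiserver status = Unhealthy (%d: %s)\n", resp.StatusCode, body)
    }
}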
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-925161 status -v=7 --alsologtostderr: exit status 3 (3.723986347s)

                                                
                                                
-- stdout --
	ha-925161
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-925161-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-925161-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-925161-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 04:30:54.330788  150653 out.go:291] Setting OutFile to fd 1 ...
	I0719 04:30:54.331308  150653 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:30:54.331320  150653 out.go:304] Setting ErrFile to fd 2...
	I0719 04:30:54.331324  150653 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:30:54.331592  150653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-122995/.minikube/bin
	I0719 04:30:54.331779  150653 out.go:298] Setting JSON to false
	I0719 04:30:54.331810  150653 mustload.go:65] Loading cluster: ha-925161
	I0719 04:30:54.331855  150653 notify.go:220] Checking for updates...
	I0719 04:30:54.332195  150653 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:30:54.332209  150653 status.go:255] checking status of ha-925161 ...
	I0719 04:30:54.332611  150653 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:54.332663  150653 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:54.351785  150653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45419
	I0719 04:30:54.352192  150653 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:54.352727  150653 main.go:141] libmachine: Using API Version  1
	I0719 04:30:54.352748  150653 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:54.353112  150653 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:54.353335  150653 main.go:141] libmachine: (ha-925161) Calling .GetState
	I0719 04:30:54.354776  150653 status.go:330] ha-925161 host status = "Running" (err=<nil>)
	I0719 04:30:54.354798  150653 host.go:66] Checking if "ha-925161" exists ...
	I0719 04:30:54.355109  150653 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:54.355156  150653 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:54.370672  150653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44387
	I0719 04:30:54.371086  150653 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:54.371671  150653 main.go:141] libmachine: Using API Version  1
	I0719 04:30:54.371694  150653 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:54.372010  150653 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:54.372201  150653 main.go:141] libmachine: (ha-925161) Calling .GetIP
	I0719 04:30:54.374855  150653 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:30:54.375312  150653 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:30:54.375340  150653 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:30:54.375475  150653 host.go:66] Checking if "ha-925161" exists ...
	I0719 04:30:54.375804  150653 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:54.375836  150653 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:54.391422  150653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45237
	I0719 04:30:54.391907  150653 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:54.392349  150653 main.go:141] libmachine: Using API Version  1
	I0719 04:30:54.392372  150653 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:54.392615  150653 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:54.392772  150653 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:30:54.392980  150653 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:30:54.393001  150653 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:30:54.395823  150653 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:30:54.396309  150653 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:30:54.396354  150653 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:30:54.396571  150653 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:30:54.396765  150653 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:30:54.396907  150653 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:30:54.397072  150653 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:30:54.480969  150653 ssh_runner.go:195] Run: systemctl --version
	I0719 04:30:54.489678  150653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:30:54.507871  150653 kubeconfig.go:125] found "ha-925161" server: "https://192.168.39.254:8443"
	I0719 04:30:54.507900  150653 api_server.go:166] Checking apiserver status ...
	I0719 04:30:54.507930  150653 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 04:30:54.526081  150653 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1190/cgroup
	W0719 04:30:54.535458  150653 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1190/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 04:30:54.535508  150653 ssh_runner.go:195] Run: ls
	I0719 04:30:54.539542  150653 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 04:30:54.544109  150653 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 04:30:54.544149  150653 status.go:422] ha-925161 apiserver status = Running (err=<nil>)
	I0719 04:30:54.544171  150653 status.go:257] ha-925161 status: &{Name:ha-925161 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:30:54.544198  150653 status.go:255] checking status of ha-925161-m02 ...
	I0719 04:30:54.544684  150653 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:54.544726  150653 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:54.559805  150653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42907
	I0719 04:30:54.560255  150653 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:54.560705  150653 main.go:141] libmachine: Using API Version  1
	I0719 04:30:54.560734  150653 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:54.561011  150653 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:54.561206  150653 main.go:141] libmachine: (ha-925161-m02) Calling .GetState
	I0719 04:30:54.562768  150653 status.go:330] ha-925161-m02 host status = "Running" (err=<nil>)
	I0719 04:30:54.562785  150653 host.go:66] Checking if "ha-925161-m02" exists ...
	I0719 04:30:54.563046  150653 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:54.563078  150653 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:54.578073  150653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39337
	I0719 04:30:54.578532  150653 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:54.578975  150653 main.go:141] libmachine: Using API Version  1
	I0719 04:30:54.579006  150653 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:54.579322  150653 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:54.579550  150653 main.go:141] libmachine: (ha-925161-m02) Calling .GetIP
	I0719 04:30:54.582387  150653 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:30:54.582768  150653 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:30:54.582801  150653 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:30:54.582949  150653 host.go:66] Checking if "ha-925161-m02" exists ...
	I0719 04:30:54.583376  150653 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:54.583429  150653 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:54.598946  150653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34335
	I0719 04:30:54.599415  150653 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:54.599799  150653 main.go:141] libmachine: Using API Version  1
	I0719 04:30:54.599824  150653 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:54.600138  150653 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:54.600338  150653 main.go:141] libmachine: (ha-925161-m02) Calling .DriverName
	I0719 04:30:54.600761  150653 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:30:54.600793  150653 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHHostname
	I0719 04:30:54.604813  150653 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:30:54.605189  150653 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:30:54.605217  150653 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:30:54.605372  150653 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHPort
	I0719 04:30:54.605552  150653 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:30:54.605696  150653 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHUsername
	I0719 04:30:54.605822  150653 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02/id_rsa Username:docker}
	W0719 04:30:57.665303  150653 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.102:22: connect: no route to host
	W0719 04:30:57.665399  150653 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	E0719 04:30:57.665457  150653 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	I0719 04:30:57.665465  150653 status.go:257] ha-925161-m02 status: &{Name:ha-925161-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0719 04:30:57.665483  150653 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	I0719 04:30:57.665490  150653 status.go:255] checking status of ha-925161-m03 ...
	I0719 04:30:57.665795  150653 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:57.665827  150653 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:57.680608  150653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32933
	I0719 04:30:57.681046  150653 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:57.681607  150653 main.go:141] libmachine: Using API Version  1
	I0719 04:30:57.681633  150653 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:57.681968  150653 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:57.682150  150653 main.go:141] libmachine: (ha-925161-m03) Calling .GetState
	I0719 04:30:57.683808  150653 status.go:330] ha-925161-m03 host status = "Running" (err=<nil>)
	I0719 04:30:57.683825  150653 host.go:66] Checking if "ha-925161-m03" exists ...
	I0719 04:30:57.684123  150653 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:57.684162  150653 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:57.698599  150653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33807
	I0719 04:30:57.698995  150653 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:57.699436  150653 main.go:141] libmachine: Using API Version  1
	I0719 04:30:57.699456  150653 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:57.699768  150653 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:57.699949  150653 main.go:141] libmachine: (ha-925161-m03) Calling .GetIP
	I0719 04:30:57.702384  150653 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:30:57.702787  150653 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:30:57.702801  150653 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:30:57.702961  150653 host.go:66] Checking if "ha-925161-m03" exists ...
	I0719 04:30:57.703303  150653 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:57.703331  150653 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:57.718796  150653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34931
	I0719 04:30:57.719251  150653 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:57.719714  150653 main.go:141] libmachine: Using API Version  1
	I0719 04:30:57.719734  150653 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:57.720048  150653 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:57.720278  150653 main.go:141] libmachine: (ha-925161-m03) Calling .DriverName
	I0719 04:30:57.720480  150653 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:30:57.720504  150653 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHHostname
	I0719 04:30:57.723280  150653 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:30:57.723654  150653 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:30:57.723681  150653 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:30:57.723828  150653 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHPort
	I0719 04:30:57.723975  150653 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:30:57.724131  150653 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHUsername
	I0719 04:30:57.724262  150653 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03/id_rsa Username:docker}
	I0719 04:30:57.800005  150653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:30:57.818395  150653 kubeconfig.go:125] found "ha-925161" server: "https://192.168.39.254:8443"
	I0719 04:30:57.818423  150653 api_server.go:166] Checking apiserver status ...
	I0719 04:30:57.818455  150653 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 04:30:57.832703  150653 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1532/cgroup
	W0719 04:30:57.842048  150653 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1532/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 04:30:57.842097  150653 ssh_runner.go:195] Run: ls
	I0719 04:30:57.846088  150653 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 04:30:57.850500  150653 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 04:30:57.850523  150653 status.go:422] ha-925161-m03 apiserver status = Running (err=<nil>)
	I0719 04:30:57.850532  150653 status.go:257] ha-925161-m03 status: &{Name:ha-925161-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:30:57.850547  150653 status.go:255] checking status of ha-925161-m04 ...
	I0719 04:30:57.850824  150653 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:57.850848  150653 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:57.866013  150653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34661
	I0719 04:30:57.866541  150653 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:57.866976  150653 main.go:141] libmachine: Using API Version  1
	I0719 04:30:57.866998  150653 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:57.867321  150653 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:57.867512  150653 main.go:141] libmachine: (ha-925161-m04) Calling .GetState
	I0719 04:30:57.869175  150653 status.go:330] ha-925161-m04 host status = "Running" (err=<nil>)
	I0719 04:30:57.869193  150653 host.go:66] Checking if "ha-925161-m04" exists ...
	I0719 04:30:57.869479  150653 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:57.869503  150653 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:57.884407  150653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35515
	I0719 04:30:57.884777  150653 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:57.885237  150653 main.go:141] libmachine: Using API Version  1
	I0719 04:30:57.885256  150653 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:57.885559  150653 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:57.885751  150653 main.go:141] libmachine: (ha-925161-m04) Calling .GetIP
	I0719 04:30:57.888420  150653 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:30:57.888828  150653 main.go:141] libmachine: (ha-925161-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:a2:b6", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:27:16 +0000 UTC Type:0 Mac:52:54:00:cb:a2:b6 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:ha-925161-m04 Clientid:01:52:54:00:cb:a2:b6}
	I0719 04:30:57.888860  150653 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined IP address 192.168.39.75 and MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:30:57.888988  150653 host.go:66] Checking if "ha-925161-m04" exists ...
	I0719 04:30:57.889315  150653 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:30:57.889358  150653 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:30:57.903891  150653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40313
	I0719 04:30:57.904314  150653 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:30:57.904770  150653 main.go:141] libmachine: Using API Version  1
	I0719 04:30:57.904790  150653 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:30:57.905138  150653 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:30:57.905337  150653 main.go:141] libmachine: (ha-925161-m04) Calling .DriverName
	I0719 04:30:57.905516  150653 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:30:57.905539  150653 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHHostname
	I0719 04:30:57.908254  150653 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:30:57.908645  150653 main.go:141] libmachine: (ha-925161-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:a2:b6", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:27:16 +0000 UTC Type:0 Mac:52:54:00:cb:a2:b6 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:ha-925161-m04 Clientid:01:52:54:00:cb:a2:b6}
	I0719 04:30:57.908669  150653 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined IP address 192.168.39.75 and MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:30:57.908839  150653 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHPort
	I0719 04:30:57.908998  150653 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHKeyPath
	I0719 04:30:57.909171  150653 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHUsername
	I0719 04:30:57.909334  150653 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m04/id_rsa Username:docker}
	I0719 04:30:57.995869  150653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:30:58.010132  150653 status.go:257] ha-925161-m04 status: &{Name:ha-925161-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
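The repeated "dial tcp 192.168.39.102:22: connect: no route to host" lines explain why ha-925161-m02 is reported as host: Error with kubelet/apiserver: Nonexistent: the SSH session needed for the df/systemctl checks can never be opened, so every field that depends on it is marked unknown. Below is a hedged sketch of the failing step reduced to a plain TCP dial to port 22 with a timeout; it only illustrates the failure mode and is not the sshutil retry logic the trace refers to.

// ssh_reachability.go - illustrative sketch of the failing step, not minikube code.
// The status probe above cannot even open a TCP connection to port 22 on
// ha-925161-m02 (192.168.39.102), which is why its host state degrades to "Error".
package main

import (
    "fmt"
    "net"
    "time"
)

func main() {
    addr := "192.168.39.102:22" // the unreachable node from the report above

    conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
    if err != nil {
        // With the VM's network gone this typically surfaces as
        // "connect: no route to host", matching the trace.
        fmt.Printf("host = Error, kubelet/apiserver = Nonexistent: %v\n", err)
        return
    }
    conn.Close()
    fmt.Println("port 22 reachable; SSH-based checks could proceed")
}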
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-925161 status -v=7 --alsologtostderr: exit status 3 (3.726779888s)

                                                
                                                
-- stdout --
	ha-925161
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-925161-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-925161-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-925161-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 04:31:01.733079  150761 out.go:291] Setting OutFile to fd 1 ...
	I0719 04:31:01.733200  150761 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:31:01.733209  150761 out.go:304] Setting ErrFile to fd 2...
	I0719 04:31:01.733213  150761 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:31:01.733406  150761 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-122995/.minikube/bin
	I0719 04:31:01.733592  150761 out.go:298] Setting JSON to false
	I0719 04:31:01.733634  150761 mustload.go:65] Loading cluster: ha-925161
	I0719 04:31:01.733734  150761 notify.go:220] Checking for updates...
	I0719 04:31:01.734794  150761 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:31:01.734830  150761 status.go:255] checking status of ha-925161 ...
	I0719 04:31:01.735619  150761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:01.735661  150761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:01.751096  150761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43513
	I0719 04:31:01.751505  150761 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:01.752046  150761 main.go:141] libmachine: Using API Version  1
	I0719 04:31:01.752078  150761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:01.752410  150761 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:01.752590  150761 main.go:141] libmachine: (ha-925161) Calling .GetState
	I0719 04:31:01.754207  150761 status.go:330] ha-925161 host status = "Running" (err=<nil>)
	I0719 04:31:01.754222  150761 host.go:66] Checking if "ha-925161" exists ...
	I0719 04:31:01.754483  150761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:01.754521  150761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:01.769256  150761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40333
	I0719 04:31:01.769696  150761 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:01.770178  150761 main.go:141] libmachine: Using API Version  1
	I0719 04:31:01.770201  150761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:01.770551  150761 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:01.770788  150761 main.go:141] libmachine: (ha-925161) Calling .GetIP
	I0719 04:31:01.773868  150761 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:31:01.774349  150761 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:31:01.774377  150761 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:31:01.774552  150761 host.go:66] Checking if "ha-925161" exists ...
	I0719 04:31:01.774855  150761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:01.774900  150761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:01.790274  150761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40299
	I0719 04:31:01.790724  150761 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:01.791196  150761 main.go:141] libmachine: Using API Version  1
	I0719 04:31:01.791214  150761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:01.791560  150761 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:01.791769  150761 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:31:01.792058  150761 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:31:01.792099  150761 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:31:01.795006  150761 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:31:01.795512  150761 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:31:01.795545  150761 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:31:01.795640  150761 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:31:01.795830  150761 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:31:01.795993  150761 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:31:01.796151  150761 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:31:01.880312  150761 ssh_runner.go:195] Run: systemctl --version
	I0719 04:31:01.886609  150761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:31:01.904819  150761 kubeconfig.go:125] found "ha-925161" server: "https://192.168.39.254:8443"
	I0719 04:31:01.904847  150761 api_server.go:166] Checking apiserver status ...
	I0719 04:31:01.904878  150761 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 04:31:01.919592  150761 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1190/cgroup
	W0719 04:31:01.932432  150761 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1190/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 04:31:01.932492  150761 ssh_runner.go:195] Run: ls
	I0719 04:31:01.938274  150761 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 04:31:01.945753  150761 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 04:31:01.945777  150761 status.go:422] ha-925161 apiserver status = Running (err=<nil>)
	I0719 04:31:01.945787  150761 status.go:257] ha-925161 status: &{Name:ha-925161 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:31:01.945805  150761 status.go:255] checking status of ha-925161-m02 ...
	I0719 04:31:01.946117  150761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:01.946155  150761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:01.961879  150761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46447
	I0719 04:31:01.962308  150761 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:01.962896  150761 main.go:141] libmachine: Using API Version  1
	I0719 04:31:01.962928  150761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:01.963295  150761 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:01.963500  150761 main.go:141] libmachine: (ha-925161-m02) Calling .GetState
	I0719 04:31:01.965183  150761 status.go:330] ha-925161-m02 host status = "Running" (err=<nil>)
	I0719 04:31:01.965201  150761 host.go:66] Checking if "ha-925161-m02" exists ...
	I0719 04:31:01.965507  150761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:01.965540  150761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:01.980288  150761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40831
	I0719 04:31:01.980774  150761 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:01.981260  150761 main.go:141] libmachine: Using API Version  1
	I0719 04:31:01.981283  150761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:01.981617  150761 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:01.981802  150761 main.go:141] libmachine: (ha-925161-m02) Calling .GetIP
	I0719 04:31:01.984293  150761 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:31:01.984732  150761 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:31:01.984758  150761 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:31:01.984923  150761 host.go:66] Checking if "ha-925161-m02" exists ...
	I0719 04:31:01.985291  150761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:01.985333  150761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:02.001683  150761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40253
	I0719 04:31:02.002119  150761 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:02.002671  150761 main.go:141] libmachine: Using API Version  1
	I0719 04:31:02.002697  150761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:02.003086  150761 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:02.003356  150761 main.go:141] libmachine: (ha-925161-m02) Calling .DriverName
	I0719 04:31:02.003575  150761 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:31:02.003604  150761 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHHostname
	I0719 04:31:02.006700  150761 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:31:02.007150  150761 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:31:02.007174  150761 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:31:02.007283  150761 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHPort
	I0719 04:31:02.007442  150761 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:31:02.007575  150761 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHUsername
	I0719 04:31:02.007726  150761 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02/id_rsa Username:docker}
	W0719 04:31:05.061341  150761 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.102:22: connect: no route to host
	W0719 04:31:05.061447  150761 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	E0719 04:31:05.061465  150761 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	I0719 04:31:05.061472  150761 status.go:257] ha-925161-m02 status: &{Name:ha-925161-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0719 04:31:05.061490  150761 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.102:22: connect: no route to host
	I0719 04:31:05.061501  150761 status.go:255] checking status of ha-925161-m03 ...
	I0719 04:31:05.061805  150761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:05.061845  150761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:05.076736  150761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36183
	I0719 04:31:05.077227  150761 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:05.077720  150761 main.go:141] libmachine: Using API Version  1
	I0719 04:31:05.077752  150761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:05.078060  150761 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:05.078283  150761 main.go:141] libmachine: (ha-925161-m03) Calling .GetState
	I0719 04:31:05.080325  150761 status.go:330] ha-925161-m03 host status = "Running" (err=<nil>)
	I0719 04:31:05.080343  150761 host.go:66] Checking if "ha-925161-m03" exists ...
	I0719 04:31:05.080625  150761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:05.080661  150761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:05.095302  150761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39023
	I0719 04:31:05.095837  150761 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:05.096374  150761 main.go:141] libmachine: Using API Version  1
	I0719 04:31:05.096394  150761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:05.096691  150761 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:05.096871  150761 main.go:141] libmachine: (ha-925161-m03) Calling .GetIP
	I0719 04:31:05.099837  150761 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:31:05.100308  150761 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:31:05.100330  150761 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:31:05.100478  150761 host.go:66] Checking if "ha-925161-m03" exists ...
	I0719 04:31:05.100766  150761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:05.100811  150761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:05.117786  150761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43479
	I0719 04:31:05.118373  150761 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:05.119080  150761 main.go:141] libmachine: Using API Version  1
	I0719 04:31:05.119115  150761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:05.119489  150761 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:05.119729  150761 main.go:141] libmachine: (ha-925161-m03) Calling .DriverName
	I0719 04:31:05.119928  150761 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:31:05.119956  150761 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHHostname
	I0719 04:31:05.123254  150761 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:31:05.123796  150761 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:31:05.123827  150761 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:31:05.123999  150761 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHPort
	I0719 04:31:05.124174  150761 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:31:05.124324  150761 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHUsername
	I0719 04:31:05.124483  150761 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03/id_rsa Username:docker}
	I0719 04:31:05.208092  150761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:31:05.222484  150761 kubeconfig.go:125] found "ha-925161" server: "https://192.168.39.254:8443"
	I0719 04:31:05.222514  150761 api_server.go:166] Checking apiserver status ...
	I0719 04:31:05.222544  150761 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 04:31:05.235247  150761 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1532/cgroup
	W0719 04:31:05.244284  150761 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1532/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 04:31:05.244335  150761 ssh_runner.go:195] Run: ls
	I0719 04:31:05.248519  150761 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 04:31:05.252649  150761 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 04:31:05.252673  150761 status.go:422] ha-925161-m03 apiserver status = Running (err=<nil>)
	I0719 04:31:05.252681  150761 status.go:257] ha-925161-m03 status: &{Name:ha-925161-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:31:05.252697  150761 status.go:255] checking status of ha-925161-m04 ...
	I0719 04:31:05.252973  150761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:05.253006  150761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:05.268037  150761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42573
	I0719 04:31:05.268466  150761 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:05.269023  150761 main.go:141] libmachine: Using API Version  1
	I0719 04:31:05.269050  150761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:05.269459  150761 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:05.269682  150761 main.go:141] libmachine: (ha-925161-m04) Calling .GetState
	I0719 04:31:05.271401  150761 status.go:330] ha-925161-m04 host status = "Running" (err=<nil>)
	I0719 04:31:05.271420  150761 host.go:66] Checking if "ha-925161-m04" exists ...
	I0719 04:31:05.271803  150761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:05.271841  150761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:05.286426  150761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44305
	I0719 04:31:05.286836  150761 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:05.287303  150761 main.go:141] libmachine: Using API Version  1
	I0719 04:31:05.287324  150761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:05.287696  150761 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:05.287885  150761 main.go:141] libmachine: (ha-925161-m04) Calling .GetIP
	I0719 04:31:05.290759  150761 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:31:05.291144  150761 main.go:141] libmachine: (ha-925161-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:a2:b6", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:27:16 +0000 UTC Type:0 Mac:52:54:00:cb:a2:b6 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:ha-925161-m04 Clientid:01:52:54:00:cb:a2:b6}
	I0719 04:31:05.291179  150761 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined IP address 192.168.39.75 and MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:31:05.291300  150761 host.go:66] Checking if "ha-925161-m04" exists ...
	I0719 04:31:05.291592  150761 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:05.291629  150761 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:05.306893  150761 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34195
	I0719 04:31:05.307330  150761 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:05.307833  150761 main.go:141] libmachine: Using API Version  1
	I0719 04:31:05.307854  150761 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:05.308194  150761 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:05.308392  150761 main.go:141] libmachine: (ha-925161-m04) Calling .DriverName
	I0719 04:31:05.308582  150761 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:31:05.308604  150761 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHHostname
	I0719 04:31:05.311338  150761 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:31:05.311730  150761 main.go:141] libmachine: (ha-925161-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:a2:b6", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:27:16 +0000 UTC Type:0 Mac:52:54:00:cb:a2:b6 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:ha-925161-m04 Clientid:01:52:54:00:cb:a2:b6}
	I0719 04:31:05.311762  150761 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined IP address 192.168.39.75 and MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:31:05.311900  150761 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHPort
	I0719 04:31:05.312065  150761 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHKeyPath
	I0719 04:31:05.312211  150761 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHUsername
	I0719 04:31:05.312393  150761 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m04/id_rsa Username:docker}
	I0719 04:31:05.396354  150761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:31:05.413718  150761 status.go:257] ha-925161-m04 status: &{Name:ha-925161-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-925161 status -v=7 --alsologtostderr: exit status 7 (639.136205ms)

                                                
                                                
-- stdout --
	ha-925161
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-925161-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-925161-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-925161-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 04:31:12.781874  150902 out.go:291] Setting OutFile to fd 1 ...
	I0719 04:31:12.782143  150902 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:31:12.782153  150902 out.go:304] Setting ErrFile to fd 2...
	I0719 04:31:12.782159  150902 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:31:12.782371  150902 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-122995/.minikube/bin
	I0719 04:31:12.782575  150902 out.go:298] Setting JSON to false
	I0719 04:31:12.782615  150902 mustload.go:65] Loading cluster: ha-925161
	I0719 04:31:12.782732  150902 notify.go:220] Checking for updates...
	I0719 04:31:12.783073  150902 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:31:12.783095  150902 status.go:255] checking status of ha-925161 ...
	I0719 04:31:12.783577  150902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:12.783656  150902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:12.801820  150902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41881
	I0719 04:31:12.802285  150902 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:12.802832  150902 main.go:141] libmachine: Using API Version  1
	I0719 04:31:12.802857  150902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:12.803299  150902 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:12.803543  150902 main.go:141] libmachine: (ha-925161) Calling .GetState
	I0719 04:31:12.805612  150902 status.go:330] ha-925161 host status = "Running" (err=<nil>)
	I0719 04:31:12.805633  150902 host.go:66] Checking if "ha-925161" exists ...
	I0719 04:31:12.806117  150902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:12.806174  150902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:12.822097  150902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44367
	I0719 04:31:12.822582  150902 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:12.823172  150902 main.go:141] libmachine: Using API Version  1
	I0719 04:31:12.823194  150902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:12.823659  150902 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:12.823883  150902 main.go:141] libmachine: (ha-925161) Calling .GetIP
	I0719 04:31:12.827300  150902 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:31:12.827762  150902 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:31:12.827792  150902 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:31:12.827965  150902 host.go:66] Checking if "ha-925161" exists ...
	I0719 04:31:12.828296  150902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:12.828356  150902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:12.844406  150902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45075
	I0719 04:31:12.844842  150902 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:12.845417  150902 main.go:141] libmachine: Using API Version  1
	I0719 04:31:12.845443  150902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:12.845867  150902 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:12.846069  150902 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:31:12.846319  150902 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:31:12.846343  150902 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:31:12.849718  150902 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:31:12.850199  150902 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:31:12.850283  150902 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:31:12.850480  150902 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:31:12.851077  150902 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:31:12.851713  150902 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:31:12.852053  150902 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:31:12.936484  150902 ssh_runner.go:195] Run: systemctl --version
	I0719 04:31:12.944011  150902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:31:12.960084  150902 kubeconfig.go:125] found "ha-925161" server: "https://192.168.39.254:8443"
	I0719 04:31:12.960113  150902 api_server.go:166] Checking apiserver status ...
	I0719 04:31:12.960143  150902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 04:31:12.976024  150902 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1190/cgroup
	W0719 04:31:12.985199  150902 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1190/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 04:31:12.985258  150902 ssh_runner.go:195] Run: ls
	I0719 04:31:12.989591  150902 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 04:31:12.995218  150902 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 04:31:12.995250  150902 status.go:422] ha-925161 apiserver status = Running (err=<nil>)
	I0719 04:31:12.995266  150902 status.go:257] ha-925161 status: &{Name:ha-925161 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:31:12.995293  150902 status.go:255] checking status of ha-925161-m02 ...
	I0719 04:31:12.995595  150902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:12.995632  150902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:13.010968  150902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34175
	I0719 04:31:13.011379  150902 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:13.011851  150902 main.go:141] libmachine: Using API Version  1
	I0719 04:31:13.011871  150902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:13.012303  150902 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:13.012486  150902 main.go:141] libmachine: (ha-925161-m02) Calling .GetState
	I0719 04:31:13.014259  150902 status.go:330] ha-925161-m02 host status = "Stopped" (err=<nil>)
	I0719 04:31:13.014277  150902 status.go:343] host is not running, skipping remaining checks
	I0719 04:31:13.014285  150902 status.go:257] ha-925161-m02 status: &{Name:ha-925161-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:31:13.014309  150902 status.go:255] checking status of ha-925161-m03 ...
	I0719 04:31:13.014593  150902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:13.014634  150902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:13.030187  150902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37705
	I0719 04:31:13.030599  150902 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:13.031034  150902 main.go:141] libmachine: Using API Version  1
	I0719 04:31:13.031054  150902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:13.031454  150902 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:13.031698  150902 main.go:141] libmachine: (ha-925161-m03) Calling .GetState
	I0719 04:31:13.033465  150902 status.go:330] ha-925161-m03 host status = "Running" (err=<nil>)
	I0719 04:31:13.033486  150902 host.go:66] Checking if "ha-925161-m03" exists ...
	I0719 04:31:13.033782  150902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:13.033826  150902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:13.048952  150902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36967
	I0719 04:31:13.049451  150902 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:13.049965  150902 main.go:141] libmachine: Using API Version  1
	I0719 04:31:13.049990  150902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:13.050337  150902 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:13.050533  150902 main.go:141] libmachine: (ha-925161-m03) Calling .GetIP
	I0719 04:31:13.053442  150902 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:31:13.053892  150902 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:31:13.053921  150902 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:31:13.054046  150902 host.go:66] Checking if "ha-925161-m03" exists ...
	I0719 04:31:13.054327  150902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:13.054371  150902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:13.070623  150902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41459
	I0719 04:31:13.071025  150902 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:13.071493  150902 main.go:141] libmachine: Using API Version  1
	I0719 04:31:13.071518  150902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:13.071791  150902 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:13.072014  150902 main.go:141] libmachine: (ha-925161-m03) Calling .DriverName
	I0719 04:31:13.072243  150902 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:31:13.072266  150902 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHHostname
	I0719 04:31:13.075323  150902 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:31:13.075880  150902 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:31:13.075906  150902 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:31:13.076030  150902 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHPort
	I0719 04:31:13.076208  150902 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:31:13.076371  150902 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHUsername
	I0719 04:31:13.076509  150902 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03/id_rsa Username:docker}
	I0719 04:31:13.159907  150902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:31:13.175959  150902 kubeconfig.go:125] found "ha-925161" server: "https://192.168.39.254:8443"
	I0719 04:31:13.175988  150902 api_server.go:166] Checking apiserver status ...
	I0719 04:31:13.176020  150902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 04:31:13.198462  150902 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1532/cgroup
	W0719 04:31:13.208779  150902 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1532/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 04:31:13.208828  150902 ssh_runner.go:195] Run: ls
	I0719 04:31:13.214393  150902 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 04:31:13.218799  150902 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 04:31:13.218822  150902 status.go:422] ha-925161-m03 apiserver status = Running (err=<nil>)
	I0719 04:31:13.218830  150902 status.go:257] ha-925161-m03 status: &{Name:ha-925161-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:31:13.218849  150902 status.go:255] checking status of ha-925161-m04 ...
	I0719 04:31:13.219139  150902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:13.219180  150902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:13.234503  150902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46641
	I0719 04:31:13.234934  150902 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:13.235423  150902 main.go:141] libmachine: Using API Version  1
	I0719 04:31:13.235445  150902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:13.235753  150902 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:13.235945  150902 main.go:141] libmachine: (ha-925161-m04) Calling .GetState
	I0719 04:31:13.237518  150902 status.go:330] ha-925161-m04 host status = "Running" (err=<nil>)
	I0719 04:31:13.237534  150902 host.go:66] Checking if "ha-925161-m04" exists ...
	I0719 04:31:13.237834  150902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:13.237872  150902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:13.253036  150902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44665
	I0719 04:31:13.253506  150902 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:13.253951  150902 main.go:141] libmachine: Using API Version  1
	I0719 04:31:13.253974  150902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:13.254294  150902 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:13.254452  150902 main.go:141] libmachine: (ha-925161-m04) Calling .GetIP
	I0719 04:31:13.257231  150902 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:31:13.257654  150902 main.go:141] libmachine: (ha-925161-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:a2:b6", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:27:16 +0000 UTC Type:0 Mac:52:54:00:cb:a2:b6 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:ha-925161-m04 Clientid:01:52:54:00:cb:a2:b6}
	I0719 04:31:13.257685  150902 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined IP address 192.168.39.75 and MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:31:13.257830  150902 host.go:66] Checking if "ha-925161-m04" exists ...
	I0719 04:31:13.258130  150902 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:13.258169  150902 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:13.272890  150902 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44377
	I0719 04:31:13.273302  150902 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:13.273754  150902 main.go:141] libmachine: Using API Version  1
	I0719 04:31:13.273778  150902 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:13.274135  150902 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:13.274338  150902 main.go:141] libmachine: (ha-925161-m04) Calling .DriverName
	I0719 04:31:13.274518  150902 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:31:13.274538  150902 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHHostname
	I0719 04:31:13.277309  150902 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:31:13.277678  150902 main.go:141] libmachine: (ha-925161-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:a2:b6", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:27:16 +0000 UTC Type:0 Mac:52:54:00:cb:a2:b6 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:ha-925161-m04 Clientid:01:52:54:00:cb:a2:b6}
	I0719 04:31:13.277700  150902 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined IP address 192.168.39.75 and MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:31:13.277868  150902 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHPort
	I0719 04:31:13.278046  150902 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHKeyPath
	I0719 04:31:13.278199  150902 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHUsername
	I0719 04:31:13.278367  150902 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m04/id_rsa Username:docker}
	I0719 04:31:13.363733  150902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:31:13.377019  150902 status.go:257] ha-925161-m04 status: &{Name:ha-925161-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-925161 status -v=7 --alsologtostderr: exit status 7 (612.227232ms)

                                                
                                                
-- stdout --
	ha-925161
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-925161-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-925161-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-925161-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 04:31:18.433955  151001 out.go:291] Setting OutFile to fd 1 ...
	I0719 04:31:18.434073  151001 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:31:18.434084  151001 out.go:304] Setting ErrFile to fd 2...
	I0719 04:31:18.434090  151001 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:31:18.434291  151001 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-122995/.minikube/bin
	I0719 04:31:18.434472  151001 out.go:298] Setting JSON to false
	I0719 04:31:18.434509  151001 mustload.go:65] Loading cluster: ha-925161
	I0719 04:31:18.434623  151001 notify.go:220] Checking for updates...
	I0719 04:31:18.434918  151001 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:31:18.434936  151001 status.go:255] checking status of ha-925161 ...
	I0719 04:31:18.435336  151001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:18.435408  151001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:18.450829  151001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34473
	I0719 04:31:18.451329  151001 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:18.451936  151001 main.go:141] libmachine: Using API Version  1
	I0719 04:31:18.451966  151001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:18.452348  151001 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:18.452566  151001 main.go:141] libmachine: (ha-925161) Calling .GetState
	I0719 04:31:18.454182  151001 status.go:330] ha-925161 host status = "Running" (err=<nil>)
	I0719 04:31:18.454199  151001 host.go:66] Checking if "ha-925161" exists ...
	I0719 04:31:18.454476  151001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:18.454510  151001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:18.469681  151001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37375
	I0719 04:31:18.470142  151001 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:18.470627  151001 main.go:141] libmachine: Using API Version  1
	I0719 04:31:18.470652  151001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:18.471027  151001 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:18.471225  151001 main.go:141] libmachine: (ha-925161) Calling .GetIP
	I0719 04:31:18.474327  151001 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:31:18.474747  151001 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:31:18.474774  151001 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:31:18.474940  151001 host.go:66] Checking if "ha-925161" exists ...
	I0719 04:31:18.475264  151001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:18.475311  151001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:18.491544  151001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36375
	I0719 04:31:18.491934  151001 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:18.492516  151001 main.go:141] libmachine: Using API Version  1
	I0719 04:31:18.492539  151001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:18.492878  151001 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:18.493131  151001 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:31:18.493349  151001 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:31:18.493396  151001 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:31:18.496270  151001 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:31:18.496657  151001 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:31:18.496682  151001 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:31:18.496783  151001 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:31:18.496948  151001 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:31:18.497294  151001 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:31:18.497495  151001 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:31:18.585718  151001 ssh_runner.go:195] Run: systemctl --version
	I0719 04:31:18.591673  151001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:31:18.606243  151001 kubeconfig.go:125] found "ha-925161" server: "https://192.168.39.254:8443"
	I0719 04:31:18.606273  151001 api_server.go:166] Checking apiserver status ...
	I0719 04:31:18.606311  151001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 04:31:18.619894  151001 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1190/cgroup
	W0719 04:31:18.631045  151001 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1190/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 04:31:18.631102  151001 ssh_runner.go:195] Run: ls
	I0719 04:31:18.635134  151001 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 04:31:18.640382  151001 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 04:31:18.640405  151001 status.go:422] ha-925161 apiserver status = Running (err=<nil>)
	I0719 04:31:18.640415  151001 status.go:257] ha-925161 status: &{Name:ha-925161 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:31:18.640430  151001 status.go:255] checking status of ha-925161-m02 ...
	I0719 04:31:18.640699  151001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:18.640733  151001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:18.655668  151001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41473
	I0719 04:31:18.656096  151001 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:18.656639  151001 main.go:141] libmachine: Using API Version  1
	I0719 04:31:18.656660  151001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:18.656983  151001 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:18.657225  151001 main.go:141] libmachine: (ha-925161-m02) Calling .GetState
	I0719 04:31:18.658856  151001 status.go:330] ha-925161-m02 host status = "Stopped" (err=<nil>)
	I0719 04:31:18.658872  151001 status.go:343] host is not running, skipping remaining checks
	I0719 04:31:18.658881  151001 status.go:257] ha-925161-m02 status: &{Name:ha-925161-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:31:18.658905  151001 status.go:255] checking status of ha-925161-m03 ...
	I0719 04:31:18.659228  151001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:18.659268  151001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:18.674287  151001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39039
	I0719 04:31:18.674664  151001 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:18.675166  151001 main.go:141] libmachine: Using API Version  1
	I0719 04:31:18.675191  151001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:18.675511  151001 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:18.675712  151001 main.go:141] libmachine: (ha-925161-m03) Calling .GetState
	I0719 04:31:18.677354  151001 status.go:330] ha-925161-m03 host status = "Running" (err=<nil>)
	I0719 04:31:18.677370  151001 host.go:66] Checking if "ha-925161-m03" exists ...
	I0719 04:31:18.677649  151001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:18.677683  151001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:18.693854  151001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33159
	I0719 04:31:18.694305  151001 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:18.694754  151001 main.go:141] libmachine: Using API Version  1
	I0719 04:31:18.694775  151001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:18.695189  151001 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:18.695387  151001 main.go:141] libmachine: (ha-925161-m03) Calling .GetIP
	I0719 04:31:18.698265  151001 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:31:18.698803  151001 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:31:18.698831  151001 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:31:18.698988  151001 host.go:66] Checking if "ha-925161-m03" exists ...
	I0719 04:31:18.699297  151001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:18.699341  151001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:18.714773  151001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33625
	I0719 04:31:18.715244  151001 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:18.715784  151001 main.go:141] libmachine: Using API Version  1
	I0719 04:31:18.715811  151001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:18.716194  151001 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:18.716410  151001 main.go:141] libmachine: (ha-925161-m03) Calling .DriverName
	I0719 04:31:18.716643  151001 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:31:18.716674  151001 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHHostname
	I0719 04:31:18.719528  151001 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:31:18.720035  151001 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:31:18.720068  151001 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:31:18.720224  151001 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHPort
	I0719 04:31:18.720408  151001 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:31:18.720563  151001 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHUsername
	I0719 04:31:18.720729  151001 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03/id_rsa Username:docker}
	I0719 04:31:18.800107  151001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:31:18.815161  151001 kubeconfig.go:125] found "ha-925161" server: "https://192.168.39.254:8443"
	I0719 04:31:18.815201  151001 api_server.go:166] Checking apiserver status ...
	I0719 04:31:18.815245  151001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 04:31:18.828330  151001 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1532/cgroup
	W0719 04:31:18.837210  151001 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1532/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 04:31:18.837261  151001 ssh_runner.go:195] Run: ls
	I0719 04:31:18.841291  151001 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 04:31:18.845427  151001 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 04:31:18.845454  151001 status.go:422] ha-925161-m03 apiserver status = Running (err=<nil>)
	I0719 04:31:18.845465  151001 status.go:257] ha-925161-m03 status: &{Name:ha-925161-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:31:18.845490  151001 status.go:255] checking status of ha-925161-m04 ...
	I0719 04:31:18.845798  151001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:18.845846  151001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:18.861609  151001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35351
	I0719 04:31:18.862049  151001 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:18.862544  151001 main.go:141] libmachine: Using API Version  1
	I0719 04:31:18.862565  151001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:18.862854  151001 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:18.863069  151001 main.go:141] libmachine: (ha-925161-m04) Calling .GetState
	I0719 04:31:18.864677  151001 status.go:330] ha-925161-m04 host status = "Running" (err=<nil>)
	I0719 04:31:18.864692  151001 host.go:66] Checking if "ha-925161-m04" exists ...
	I0719 04:31:18.864958  151001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:18.864996  151001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:18.879829  151001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37327
	I0719 04:31:18.880236  151001 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:18.880705  151001 main.go:141] libmachine: Using API Version  1
	I0719 04:31:18.880729  151001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:18.881074  151001 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:18.881263  151001 main.go:141] libmachine: (ha-925161-m04) Calling .GetIP
	I0719 04:31:18.883685  151001 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:31:18.884010  151001 main.go:141] libmachine: (ha-925161-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:a2:b6", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:27:16 +0000 UTC Type:0 Mac:52:54:00:cb:a2:b6 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:ha-925161-m04 Clientid:01:52:54:00:cb:a2:b6}
	I0719 04:31:18.884039  151001 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined IP address 192.168.39.75 and MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:31:18.884185  151001 host.go:66] Checking if "ha-925161-m04" exists ...
	I0719 04:31:18.884574  151001 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:18.884637  151001 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:18.899492  151001 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40463
	I0719 04:31:18.899863  151001 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:18.900352  151001 main.go:141] libmachine: Using API Version  1
	I0719 04:31:18.900373  151001 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:18.900675  151001 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:18.900864  151001 main.go:141] libmachine: (ha-925161-m04) Calling .DriverName
	I0719 04:31:18.901023  151001 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:31:18.901047  151001 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHHostname
	I0719 04:31:18.903627  151001 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:31:18.904034  151001 main.go:141] libmachine: (ha-925161-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:a2:b6", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:27:16 +0000 UTC Type:0 Mac:52:54:00:cb:a2:b6 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:ha-925161-m04 Clientid:01:52:54:00:cb:a2:b6}
	I0719 04:31:18.904061  151001 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined IP address 192.168.39.75 and MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:31:18.904359  151001 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHPort
	I0719 04:31:18.904535  151001 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHKeyPath
	I0719 04:31:18.904703  151001 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHUsername
	I0719 04:31:18.904830  151001 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m04/id_rsa Username:docker}
	I0719 04:31:18.987680  151001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:31:19.000958  151001 status.go:257] ha-925161-m04 status: &{Name:ha-925161-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-925161 status -v=7 --alsologtostderr: exit status 7 (646.551446ms)

                                                
                                                
-- stdout --
	ha-925161
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-925161-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-925161-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-925161-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 04:31:30.588185  151106 out.go:291] Setting OutFile to fd 1 ...
	I0719 04:31:30.588452  151106 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:31:30.588467  151106 out.go:304] Setting ErrFile to fd 2...
	I0719 04:31:30.588473  151106 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:31:30.588651  151106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-122995/.minikube/bin
	I0719 04:31:30.588841  151106 out.go:298] Setting JSON to false
	I0719 04:31:30.588882  151106 mustload.go:65] Loading cluster: ha-925161
	I0719 04:31:30.588998  151106 notify.go:220] Checking for updates...
	I0719 04:31:30.589324  151106 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:31:30.589342  151106 status.go:255] checking status of ha-925161 ...
	I0719 04:31:30.589689  151106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:30.589743  151106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:30.605500  151106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42063
	I0719 04:31:30.605900  151106 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:30.606499  151106 main.go:141] libmachine: Using API Version  1
	I0719 04:31:30.606528  151106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:30.606927  151106 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:30.607182  151106 main.go:141] libmachine: (ha-925161) Calling .GetState
	I0719 04:31:30.608978  151106 status.go:330] ha-925161 host status = "Running" (err=<nil>)
	I0719 04:31:30.608997  151106 host.go:66] Checking if "ha-925161" exists ...
	I0719 04:31:30.609441  151106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:30.609490  151106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:30.624392  151106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42073
	I0719 04:31:30.624830  151106 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:30.625388  151106 main.go:141] libmachine: Using API Version  1
	I0719 04:31:30.625412  151106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:30.625736  151106 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:30.625932  151106 main.go:141] libmachine: (ha-925161) Calling .GetIP
	I0719 04:31:30.628841  151106 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:31:30.629263  151106 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:31:30.629295  151106 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:31:30.629423  151106 host.go:66] Checking if "ha-925161" exists ...
	I0719 04:31:30.629848  151106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:30.629905  151106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:30.644984  151106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37197
	I0719 04:31:30.645473  151106 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:30.645955  151106 main.go:141] libmachine: Using API Version  1
	I0719 04:31:30.645977  151106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:30.646293  151106 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:30.646506  151106 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:31:30.646697  151106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:31:30.646734  151106 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:31:30.650687  151106 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:31:30.650729  151106 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:31:30.650793  151106 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:31:30.650959  151106 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:31:30.651130  151106 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:31:30.651284  151106 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:31:30.651450  151106 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:31:30.739581  151106 ssh_runner.go:195] Run: systemctl --version
	I0719 04:31:30.747743  151106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:31:30.767367  151106 kubeconfig.go:125] found "ha-925161" server: "https://192.168.39.254:8443"
	I0719 04:31:30.767396  151106 api_server.go:166] Checking apiserver status ...
	I0719 04:31:30.767439  151106 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 04:31:30.781202  151106 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1190/cgroup
	W0719 04:31:30.794466  151106 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1190/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 04:31:30.794511  151106 ssh_runner.go:195] Run: ls
	I0719 04:31:30.799714  151106 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 04:31:30.804047  151106 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 04:31:30.804069  151106 status.go:422] ha-925161 apiserver status = Running (err=<nil>)
	I0719 04:31:30.804078  151106 status.go:257] ha-925161 status: &{Name:ha-925161 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:31:30.804098  151106 status.go:255] checking status of ha-925161-m02 ...
	I0719 04:31:30.804411  151106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:30.804446  151106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:30.819753  151106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36289
	I0719 04:31:30.820158  151106 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:30.820709  151106 main.go:141] libmachine: Using API Version  1
	I0719 04:31:30.820733  151106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:30.821096  151106 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:30.821308  151106 main.go:141] libmachine: (ha-925161-m02) Calling .GetState
	I0719 04:31:30.822856  151106 status.go:330] ha-925161-m02 host status = "Stopped" (err=<nil>)
	I0719 04:31:30.822871  151106 status.go:343] host is not running, skipping remaining checks
	I0719 04:31:30.822879  151106 status.go:257] ha-925161-m02 status: &{Name:ha-925161-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:31:30.822903  151106 status.go:255] checking status of ha-925161-m03 ...
	I0719 04:31:30.823195  151106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:30.823236  151106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:30.839744  151106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42109
	I0719 04:31:30.840160  151106 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:30.840615  151106 main.go:141] libmachine: Using API Version  1
	I0719 04:31:30.840641  151106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:30.840937  151106 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:30.841113  151106 main.go:141] libmachine: (ha-925161-m03) Calling .GetState
	I0719 04:31:30.842662  151106 status.go:330] ha-925161-m03 host status = "Running" (err=<nil>)
	I0719 04:31:30.842680  151106 host.go:66] Checking if "ha-925161-m03" exists ...
	I0719 04:31:30.842950  151106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:30.842983  151106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:30.859712  151106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41679
	I0719 04:31:30.860176  151106 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:30.860731  151106 main.go:141] libmachine: Using API Version  1
	I0719 04:31:30.860758  151106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:30.861124  151106 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:30.861369  151106 main.go:141] libmachine: (ha-925161-m03) Calling .GetIP
	I0719 04:31:30.864620  151106 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:31:30.865013  151106 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:31:30.865039  151106 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:31:30.865238  151106 host.go:66] Checking if "ha-925161-m03" exists ...
	I0719 04:31:30.865608  151106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:30.865655  151106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:30.880740  151106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33719
	I0719 04:31:30.881164  151106 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:30.881662  151106 main.go:141] libmachine: Using API Version  1
	I0719 04:31:30.881686  151106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:30.881986  151106 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:30.882178  151106 main.go:141] libmachine: (ha-925161-m03) Calling .DriverName
	I0719 04:31:30.882342  151106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:31:30.882363  151106 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHHostname
	I0719 04:31:30.885043  151106 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:31:30.885553  151106 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:31:30.885582  151106 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:31:30.885728  151106 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHPort
	I0719 04:31:30.885879  151106 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:31:30.886052  151106 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHUsername
	I0719 04:31:30.886169  151106 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03/id_rsa Username:docker}
	I0719 04:31:30.969553  151106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:31:30.984601  151106 kubeconfig.go:125] found "ha-925161" server: "https://192.168.39.254:8443"
	I0719 04:31:30.984631  151106 api_server.go:166] Checking apiserver status ...
	I0719 04:31:30.984666  151106 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 04:31:31.000411  151106 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1532/cgroup
	W0719 04:31:31.014380  151106 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1532/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 04:31:31.014434  151106 ssh_runner.go:195] Run: ls
	I0719 04:31:31.019797  151106 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 04:31:31.025503  151106 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 04:31:31.025534  151106 status.go:422] ha-925161-m03 apiserver status = Running (err=<nil>)
	I0719 04:31:31.025545  151106 status.go:257] ha-925161-m03 status: &{Name:ha-925161-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:31:31.025567  151106 status.go:255] checking status of ha-925161-m04 ...
	I0719 04:31:31.025899  151106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:31.025939  151106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:31.043478  151106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43181
	I0719 04:31:31.043881  151106 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:31.044404  151106 main.go:141] libmachine: Using API Version  1
	I0719 04:31:31.044434  151106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:31.044760  151106 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:31.045089  151106 main.go:141] libmachine: (ha-925161-m04) Calling .GetState
	I0719 04:31:31.046697  151106 status.go:330] ha-925161-m04 host status = "Running" (err=<nil>)
	I0719 04:31:31.046717  151106 host.go:66] Checking if "ha-925161-m04" exists ...
	I0719 04:31:31.047002  151106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:31.047059  151106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:31.061907  151106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43307
	I0719 04:31:31.062391  151106 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:31.062913  151106 main.go:141] libmachine: Using API Version  1
	I0719 04:31:31.062939  151106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:31.063285  151106 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:31.063462  151106 main.go:141] libmachine: (ha-925161-m04) Calling .GetIP
	I0719 04:31:31.066479  151106 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:31:31.066872  151106 main.go:141] libmachine: (ha-925161-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:a2:b6", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:27:16 +0000 UTC Type:0 Mac:52:54:00:cb:a2:b6 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:ha-925161-m04 Clientid:01:52:54:00:cb:a2:b6}
	I0719 04:31:31.066903  151106 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined IP address 192.168.39.75 and MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:31:31.067045  151106 host.go:66] Checking if "ha-925161-m04" exists ...
	I0719 04:31:31.067341  151106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:31.067383  151106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:31.082397  151106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35991
	I0719 04:31:31.082850  151106 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:31.083298  151106 main.go:141] libmachine: Using API Version  1
	I0719 04:31:31.083318  151106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:31.083635  151106 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:31.083857  151106 main.go:141] libmachine: (ha-925161-m04) Calling .DriverName
	I0719 04:31:31.084073  151106 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:31:31.084095  151106 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHHostname
	I0719 04:31:31.086999  151106 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:31:31.087374  151106 main.go:141] libmachine: (ha-925161-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:a2:b6", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:27:16 +0000 UTC Type:0 Mac:52:54:00:cb:a2:b6 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:ha-925161-m04 Clientid:01:52:54:00:cb:a2:b6}
	I0719 04:31:31.087395  151106 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined IP address 192.168.39.75 and MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:31:31.087514  151106 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHPort
	I0719 04:31:31.087693  151106 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHKeyPath
	I0719 04:31:31.087835  151106 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHUsername
	I0719 04:31:31.087987  151106 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m04/id_rsa Username:docker}
	I0719 04:31:31.175606  151106 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:31:31.188375  151106 status.go:257] ha-925161-m04 status: &{Name:ha-925161-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-925161 status -v=7 --alsologtostderr" : exit status 7
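ha_test.go:432 is the assertion that failed: after restarting m02 the harness expects the status command to exit 0, but it returned exit status 7 (the stderr above shows m02's node stop/start never completed cleanly, per the audit table further down). A minimal sketch for reproducing the probe by hand; the binary path, profile name and flags are copied from the failing invocation above, and the exit-code check mirrors what the test asserts rather than any documented status-code semantics:
	# Re-run the status probe the test performs and surface its exit code.
	out/minikube-linux-amd64 -p ha-925161 status -v=7 --alsologtostderr
	echo "minikube status exit code: $?"   # the test treats any non-zero code (here 7) as a failure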
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-925161 -n ha-925161
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-925161 logs -n 25: (1.34319414s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-925161 ssh -n                                                                 | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-925161 cp ha-925161-m03:/home/docker/cp-test.txt                              | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161:/home/docker/cp-test_ha-925161-m03_ha-925161.txt                       |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n                                                                 | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n ha-925161 sudo cat                                              | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | /home/docker/cp-test_ha-925161-m03_ha-925161.txt                                 |           |         |         |                     |                     |
	| cp      | ha-925161 cp ha-925161-m03:/home/docker/cp-test.txt                              | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m02:/home/docker/cp-test_ha-925161-m03_ha-925161-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n                                                                 | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n ha-925161-m02 sudo cat                                          | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | /home/docker/cp-test_ha-925161-m03_ha-925161-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-925161 cp ha-925161-m03:/home/docker/cp-test.txt                              | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m04:/home/docker/cp-test_ha-925161-m03_ha-925161-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n                                                                 | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n ha-925161-m04 sudo cat                                          | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | /home/docker/cp-test_ha-925161-m03_ha-925161-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-925161 cp testdata/cp-test.txt                                                | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n                                                                 | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-925161 cp ha-925161-m04:/home/docker/cp-test.txt                              | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3159028946/001/cp-test_ha-925161-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n                                                                 | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-925161 cp ha-925161-m04:/home/docker/cp-test.txt                              | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161:/home/docker/cp-test_ha-925161-m04_ha-925161.txt                       |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n                                                                 | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n ha-925161 sudo cat                                              | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | /home/docker/cp-test_ha-925161-m04_ha-925161.txt                                 |           |         |         |                     |                     |
	| cp      | ha-925161 cp ha-925161-m04:/home/docker/cp-test.txt                              | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m02:/home/docker/cp-test_ha-925161-m04_ha-925161-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n                                                                 | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n ha-925161-m02 sudo cat                                          | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | /home/docker/cp-test_ha-925161-m04_ha-925161-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-925161 cp ha-925161-m04:/home/docker/cp-test.txt                              | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m03:/home/docker/cp-test_ha-925161-m04_ha-925161-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n                                                                 | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n ha-925161-m03 sudo cat                                          | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | /home/docker/cp-test_ha-925161-m04_ha-925161-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-925161 node stop m02 -v=7                                                     | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-925161 node start m02 -v=7                                                    | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:30 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 04:22:29
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 04:22:29.779814  145142 out.go:291] Setting OutFile to fd 1 ...
	I0719 04:22:29.780075  145142 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:22:29.780085  145142 out.go:304] Setting ErrFile to fd 2...
	I0719 04:22:29.780090  145142 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:22:29.780324  145142 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-122995/.minikube/bin
	I0719 04:22:29.780936  145142 out.go:298] Setting JSON to false
	I0719 04:22:29.781879  145142 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7493,"bootTime":1721355457,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 04:22:29.781936  145142 start.go:139] virtualization: kvm guest
	I0719 04:22:29.784151  145142 out.go:177] * [ha-925161] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 04:22:29.785471  145142 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 04:22:29.785479  145142 notify.go:220] Checking for updates...
	I0719 04:22:29.787820  145142 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 04:22:29.788891  145142 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-122995/kubeconfig
	I0719 04:22:29.789962  145142 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-122995/.minikube
	I0719 04:22:29.791120  145142 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 04:22:29.792216  145142 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 04:22:29.793437  145142 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 04:22:29.827725  145142 out.go:177] * Using the kvm2 driver based on user configuration
	I0719 04:22:29.828880  145142 start.go:297] selected driver: kvm2
	I0719 04:22:29.828895  145142 start.go:901] validating driver "kvm2" against <nil>
	I0719 04:22:29.828906  145142 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 04:22:29.829651  145142 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 04:22:29.829720  145142 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-122995/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 04:22:29.844753  145142 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 04:22:29.844844  145142 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 04:22:29.845270  145142 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 04:22:29.845527  145142 cni.go:84] Creating CNI manager for ""
	I0719 04:22:29.845544  145142 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0719 04:22:29.845554  145142 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0719 04:22:29.845637  145142 start.go:340] cluster config:
	{Name:ha-925161 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-925161 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:22:29.845736  145142 iso.go:125] acquiring lock: {Name:mk610026cb7ac7ecfa6440021a031d3b49160f81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 04:22:29.847611  145142 out.go:177] * Starting "ha-925161" primary control-plane node in "ha-925161" cluster
	I0719 04:22:29.848780  145142 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 04:22:29.848818  145142 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 04:22:29.848832  145142 cache.go:56] Caching tarball of preloaded images
	I0719 04:22:29.848919  145142 preload.go:172] Found /home/jenkins/minikube-integration/19302-122995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 04:22:29.848933  145142 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 04:22:29.849365  145142 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/config.json ...
	I0719 04:22:29.849395  145142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/config.json: {Name:mk42287f9f8916c94b7b3c67930dafa0c3559cb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:22:29.849568  145142 start.go:360] acquireMachinesLock for ha-925161: {Name:mkfbbe6ca8c44534b944b48224a0199ec825bc72 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 04:22:29.849609  145142 start.go:364] duration metric: took 21.401µs to acquireMachinesLock for "ha-925161"
	I0719 04:22:29.849633  145142 start.go:93] Provisioning new machine with config: &{Name:ha-925161 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-925161 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 04:22:29.849725  145142 start.go:125] createHost starting for "" (driver="kvm2")
	I0719 04:22:29.851249  145142 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 04:22:29.851419  145142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:22:29.851451  145142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:22:29.865955  145142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45139
	I0719 04:22:29.866418  145142 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:22:29.867045  145142 main.go:141] libmachine: Using API Version  1
	I0719 04:22:29.867066  145142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:22:29.867383  145142 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:22:29.867589  145142 main.go:141] libmachine: (ha-925161) Calling .GetMachineName
	I0719 04:22:29.867778  145142 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:22:29.867946  145142 start.go:159] libmachine.API.Create for "ha-925161" (driver="kvm2")
	I0719 04:22:29.867975  145142 client.go:168] LocalClient.Create starting
	I0719 04:22:29.868010  145142 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem
	I0719 04:22:29.868110  145142 main.go:141] libmachine: Decoding PEM data...
	I0719 04:22:29.868132  145142 main.go:141] libmachine: Parsing certificate...
	I0719 04:22:29.868194  145142 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem
	I0719 04:22:29.868220  145142 main.go:141] libmachine: Decoding PEM data...
	I0719 04:22:29.868234  145142 main.go:141] libmachine: Parsing certificate...
	I0719 04:22:29.868250  145142 main.go:141] libmachine: Running pre-create checks...
	I0719 04:22:29.868258  145142 main.go:141] libmachine: (ha-925161) Calling .PreCreateCheck
	I0719 04:22:29.868687  145142 main.go:141] libmachine: (ha-925161) Calling .GetConfigRaw
	I0719 04:22:29.869098  145142 main.go:141] libmachine: Creating machine...
	I0719 04:22:29.869118  145142 main.go:141] libmachine: (ha-925161) Calling .Create
	I0719 04:22:29.869252  145142 main.go:141] libmachine: (ha-925161) Creating KVM machine...
	I0719 04:22:29.870412  145142 main.go:141] libmachine: (ha-925161) DBG | found existing default KVM network
	I0719 04:22:29.871104  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:29.870959  145164 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015c10}
	I0719 04:22:29.871125  145142 main.go:141] libmachine: (ha-925161) DBG | created network xml: 
	I0719 04:22:29.871137  145142 main.go:141] libmachine: (ha-925161) DBG | <network>
	I0719 04:22:29.871146  145142 main.go:141] libmachine: (ha-925161) DBG |   <name>mk-ha-925161</name>
	I0719 04:22:29.871155  145142 main.go:141] libmachine: (ha-925161) DBG |   <dns enable='no'/>
	I0719 04:22:29.871165  145142 main.go:141] libmachine: (ha-925161) DBG |   
	I0719 04:22:29.871177  145142 main.go:141] libmachine: (ha-925161) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0719 04:22:29.871188  145142 main.go:141] libmachine: (ha-925161) DBG |     <dhcp>
	I0719 04:22:29.871278  145142 main.go:141] libmachine: (ha-925161) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0719 04:22:29.871317  145142 main.go:141] libmachine: (ha-925161) DBG |     </dhcp>
	I0719 04:22:29.871343  145142 main.go:141] libmachine: (ha-925161) DBG |   </ip>
	I0719 04:22:29.871363  145142 main.go:141] libmachine: (ha-925161) DBG |   
	I0719 04:22:29.871375  145142 main.go:141] libmachine: (ha-925161) DBG | </network>
	I0719 04:22:29.871383  145142 main.go:141] libmachine: (ha-925161) DBG | 
	I0719 04:22:29.875939  145142 main.go:141] libmachine: (ha-925161) DBG | trying to create private KVM network mk-ha-925161 192.168.39.0/24...
	I0719 04:22:29.944824  145142 main.go:141] libmachine: (ha-925161) DBG | private KVM network mk-ha-925161 192.168.39.0/24 created
	I0719 04:22:29.944873  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:29.944745  145164 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19302-122995/.minikube
	I0719 04:22:29.944887  145142 main.go:141] libmachine: (ha-925161) Setting up store path in /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161 ...
	I0719 04:22:29.944907  145142 main.go:141] libmachine: (ha-925161) Building disk image from file:///home/jenkins/minikube-integration/19302-122995/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 04:22:29.944925  145142 main.go:141] libmachine: (ha-925161) Downloading /home/jenkins/minikube-integration/19302-122995/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19302-122995/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 04:22:30.192232  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:30.192113  145164 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa...
	I0719 04:22:30.420050  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:30.419853  145164 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/ha-925161.rawdisk...
	I0719 04:22:30.420096  145142 main.go:141] libmachine: (ha-925161) DBG | Writing magic tar header
	I0719 04:22:30.420115  145142 main.go:141] libmachine: (ha-925161) DBG | Writing SSH key tar header
	I0719 04:22:30.420129  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:30.420040  145164 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161 ...
	I0719 04:22:30.420151  145142 main.go:141] libmachine: (ha-925161) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161
	I0719 04:22:30.420301  145142 main.go:141] libmachine: (ha-925161) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161 (perms=drwx------)
	I0719 04:22:30.420339  145142 main.go:141] libmachine: (ha-925161) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995/.minikube/machines
	I0719 04:22:30.420356  145142 main.go:141] libmachine: (ha-925161) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995/.minikube/machines (perms=drwxr-xr-x)
	I0719 04:22:30.420404  145142 main.go:141] libmachine: (ha-925161) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995/.minikube
	I0719 04:22:30.420431  145142 main.go:141] libmachine: (ha-925161) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995/.minikube (perms=drwxr-xr-x)
	I0719 04:22:30.420444  145142 main.go:141] libmachine: (ha-925161) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995
	I0719 04:22:30.420459  145142 main.go:141] libmachine: (ha-925161) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0719 04:22:30.420469  145142 main.go:141] libmachine: (ha-925161) DBG | Checking permissions on dir: /home/jenkins
	I0719 04:22:30.420478  145142 main.go:141] libmachine: (ha-925161) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995 (perms=drwxrwxr-x)
	I0719 04:22:30.420491  145142 main.go:141] libmachine: (ha-925161) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0719 04:22:30.420499  145142 main.go:141] libmachine: (ha-925161) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0719 04:22:30.420513  145142 main.go:141] libmachine: (ha-925161) Creating domain...
	I0719 04:22:30.420535  145142 main.go:141] libmachine: (ha-925161) DBG | Checking permissions on dir: /home
	I0719 04:22:30.420546  145142 main.go:141] libmachine: (ha-925161) DBG | Skipping /home - not owner
	I0719 04:22:30.421634  145142 main.go:141] libmachine: (ha-925161) define libvirt domain using xml: 
	I0719 04:22:30.421652  145142 main.go:141] libmachine: (ha-925161) <domain type='kvm'>
	I0719 04:22:30.421661  145142 main.go:141] libmachine: (ha-925161)   <name>ha-925161</name>
	I0719 04:22:30.421669  145142 main.go:141] libmachine: (ha-925161)   <memory unit='MiB'>2200</memory>
	I0719 04:22:30.421681  145142 main.go:141] libmachine: (ha-925161)   <vcpu>2</vcpu>
	I0719 04:22:30.421685  145142 main.go:141] libmachine: (ha-925161)   <features>
	I0719 04:22:30.421690  145142 main.go:141] libmachine: (ha-925161)     <acpi/>
	I0719 04:22:30.421694  145142 main.go:141] libmachine: (ha-925161)     <apic/>
	I0719 04:22:30.421699  145142 main.go:141] libmachine: (ha-925161)     <pae/>
	I0719 04:22:30.421708  145142 main.go:141] libmachine: (ha-925161)     
	I0719 04:22:30.421712  145142 main.go:141] libmachine: (ha-925161)   </features>
	I0719 04:22:30.421718  145142 main.go:141] libmachine: (ha-925161)   <cpu mode='host-passthrough'>
	I0719 04:22:30.421726  145142 main.go:141] libmachine: (ha-925161)   
	I0719 04:22:30.421732  145142 main.go:141] libmachine: (ha-925161)   </cpu>
	I0719 04:22:30.421740  145142 main.go:141] libmachine: (ha-925161)   <os>
	I0719 04:22:30.421754  145142 main.go:141] libmachine: (ha-925161)     <type>hvm</type>
	I0719 04:22:30.421770  145142 main.go:141] libmachine: (ha-925161)     <boot dev='cdrom'/>
	I0719 04:22:30.421783  145142 main.go:141] libmachine: (ha-925161)     <boot dev='hd'/>
	I0719 04:22:30.421791  145142 main.go:141] libmachine: (ha-925161)     <bootmenu enable='no'/>
	I0719 04:22:30.421796  145142 main.go:141] libmachine: (ha-925161)   </os>
	I0719 04:22:30.421802  145142 main.go:141] libmachine: (ha-925161)   <devices>
	I0719 04:22:30.421807  145142 main.go:141] libmachine: (ha-925161)     <disk type='file' device='cdrom'>
	I0719 04:22:30.421819  145142 main.go:141] libmachine: (ha-925161)       <source file='/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/boot2docker.iso'/>
	I0719 04:22:30.421831  145142 main.go:141] libmachine: (ha-925161)       <target dev='hdc' bus='scsi'/>
	I0719 04:22:30.421846  145142 main.go:141] libmachine: (ha-925161)       <readonly/>
	I0719 04:22:30.421861  145142 main.go:141] libmachine: (ha-925161)     </disk>
	I0719 04:22:30.421870  145142 main.go:141] libmachine: (ha-925161)     <disk type='file' device='disk'>
	I0719 04:22:30.421878  145142 main.go:141] libmachine: (ha-925161)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0719 04:22:30.421889  145142 main.go:141] libmachine: (ha-925161)       <source file='/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/ha-925161.rawdisk'/>
	I0719 04:22:30.421896  145142 main.go:141] libmachine: (ha-925161)       <target dev='hda' bus='virtio'/>
	I0719 04:22:30.421901  145142 main.go:141] libmachine: (ha-925161)     </disk>
	I0719 04:22:30.421908  145142 main.go:141] libmachine: (ha-925161)     <interface type='network'>
	I0719 04:22:30.421914  145142 main.go:141] libmachine: (ha-925161)       <source network='mk-ha-925161'/>
	I0719 04:22:30.421924  145142 main.go:141] libmachine: (ha-925161)       <model type='virtio'/>
	I0719 04:22:30.421951  145142 main.go:141] libmachine: (ha-925161)     </interface>
	I0719 04:22:30.421974  145142 main.go:141] libmachine: (ha-925161)     <interface type='network'>
	I0719 04:22:30.421987  145142 main.go:141] libmachine: (ha-925161)       <source network='default'/>
	I0719 04:22:30.421997  145142 main.go:141] libmachine: (ha-925161)       <model type='virtio'/>
	I0719 04:22:30.422009  145142 main.go:141] libmachine: (ha-925161)     </interface>
	I0719 04:22:30.422018  145142 main.go:141] libmachine: (ha-925161)     <serial type='pty'>
	I0719 04:22:30.422027  145142 main.go:141] libmachine: (ha-925161)       <target port='0'/>
	I0719 04:22:30.422034  145142 main.go:141] libmachine: (ha-925161)     </serial>
	I0719 04:22:30.422049  145142 main.go:141] libmachine: (ha-925161)     <console type='pty'>
	I0719 04:22:30.422066  145142 main.go:141] libmachine: (ha-925161)       <target type='serial' port='0'/>
	I0719 04:22:30.422078  145142 main.go:141] libmachine: (ha-925161)     </console>
	I0719 04:22:30.422089  145142 main.go:141] libmachine: (ha-925161)     <rng model='virtio'>
	I0719 04:22:30.422101  145142 main.go:141] libmachine: (ha-925161)       <backend model='random'>/dev/random</backend>
	I0719 04:22:30.422110  145142 main.go:141] libmachine: (ha-925161)     </rng>
	I0719 04:22:30.422119  145142 main.go:141] libmachine: (ha-925161)     
	I0719 04:22:30.422128  145142 main.go:141] libmachine: (ha-925161)     
	I0719 04:22:30.422136  145142 main.go:141] libmachine: (ha-925161)   </devices>
	I0719 04:22:30.422149  145142 main.go:141] libmachine: (ha-925161) </domain>
	I0719 04:22:30.422162  145142 main.go:141] libmachine: (ha-925161) 
	I0719 04:22:30.426564  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:70:7c:b0 in network default
	I0719 04:22:30.427164  145142 main.go:141] libmachine: (ha-925161) Ensuring networks are active...
	I0719 04:22:30.427178  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:30.427848  145142 main.go:141] libmachine: (ha-925161) Ensuring network default is active
	I0719 04:22:30.428157  145142 main.go:141] libmachine: (ha-925161) Ensuring network mk-ha-925161 is active
	I0719 04:22:30.428726  145142 main.go:141] libmachine: (ha-925161) Getting domain xml...
	I0719 04:22:30.429504  145142 main.go:141] libmachine: (ha-925161) Creating domain...
	I0719 04:22:31.588719  145142 main.go:141] libmachine: (ha-925161) Waiting to get IP...
	I0719 04:22:31.589394  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:31.589737  145142 main.go:141] libmachine: (ha-925161) DBG | unable to find current IP address of domain ha-925161 in network mk-ha-925161
	I0719 04:22:31.589777  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:31.589730  145164 retry.go:31] will retry after 249.411961ms: waiting for machine to come up
	I0719 04:22:31.841250  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:31.841746  145142 main.go:141] libmachine: (ha-925161) DBG | unable to find current IP address of domain ha-925161 in network mk-ha-925161
	I0719 04:22:31.841771  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:31.841699  145164 retry.go:31] will retry after 263.722178ms: waiting for machine to come up
	I0719 04:22:32.107140  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:32.107503  145142 main.go:141] libmachine: (ha-925161) DBG | unable to find current IP address of domain ha-925161 in network mk-ha-925161
	I0719 04:22:32.107526  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:32.107459  145164 retry.go:31] will retry after 367.963801ms: waiting for machine to come up
	I0719 04:22:32.476968  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:32.477453  145142 main.go:141] libmachine: (ha-925161) DBG | unable to find current IP address of domain ha-925161 in network mk-ha-925161
	I0719 04:22:32.477475  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:32.477396  145164 retry.go:31] will retry after 461.391177ms: waiting for machine to come up
	I0719 04:22:32.939800  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:32.940202  145142 main.go:141] libmachine: (ha-925161) DBG | unable to find current IP address of domain ha-925161 in network mk-ha-925161
	I0719 04:22:32.940225  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:32.940166  145164 retry.go:31] will retry after 690.740962ms: waiting for machine to come up
	I0719 04:22:33.632541  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:33.632968  145142 main.go:141] libmachine: (ha-925161) DBG | unable to find current IP address of domain ha-925161 in network mk-ha-925161
	I0719 04:22:33.632990  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:33.632939  145164 retry.go:31] will retry after 870.685105ms: waiting for machine to come up
	I0719 04:22:34.505012  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:34.505426  145142 main.go:141] libmachine: (ha-925161) DBG | unable to find current IP address of domain ha-925161 in network mk-ha-925161
	I0719 04:22:34.505457  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:34.505371  145164 retry.go:31] will retry after 787.01465ms: waiting for machine to come up
	I0719 04:22:35.293999  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:35.294365  145142 main.go:141] libmachine: (ha-925161) DBG | unable to find current IP address of domain ha-925161 in network mk-ha-925161
	I0719 04:22:35.294398  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:35.294309  145164 retry.go:31] will retry after 1.058390976s: waiting for machine to come up
	I0719 04:22:36.354463  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:36.354995  145142 main.go:141] libmachine: (ha-925161) DBG | unable to find current IP address of domain ha-925161 in network mk-ha-925161
	I0719 04:22:36.355025  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:36.354941  145164 retry.go:31] will retry after 1.505541373s: waiting for machine to come up
	I0719 04:22:37.862043  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:37.862525  145142 main.go:141] libmachine: (ha-925161) DBG | unable to find current IP address of domain ha-925161 in network mk-ha-925161
	I0719 04:22:37.862547  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:37.862473  145164 retry.go:31] will retry after 1.957410467s: waiting for machine to come up
	I0719 04:22:39.822568  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:39.823050  145142 main.go:141] libmachine: (ha-925161) DBG | unable to find current IP address of domain ha-925161 in network mk-ha-925161
	I0719 04:22:39.823089  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:39.823001  145164 retry.go:31] will retry after 2.175599008s: waiting for machine to come up
	I0719 04:22:41.999787  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:42.000202  145142 main.go:141] libmachine: (ha-925161) DBG | unable to find current IP address of domain ha-925161 in network mk-ha-925161
	I0719 04:22:42.000233  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:42.000150  145164 retry.go:31] will retry after 2.207076605s: waiting for machine to come up
	I0719 04:22:44.210455  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:44.210888  145142 main.go:141] libmachine: (ha-925161) DBG | unable to find current IP address of domain ha-925161 in network mk-ha-925161
	I0719 04:22:44.210912  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:44.210840  145164 retry.go:31] will retry after 2.974664162s: waiting for machine to come up
	I0719 04:22:47.188508  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:47.189032  145142 main.go:141] libmachine: (ha-925161) DBG | unable to find current IP address of domain ha-925161 in network mk-ha-925161
	I0719 04:22:47.189054  145142 main.go:141] libmachine: (ha-925161) DBG | I0719 04:22:47.188978  145164 retry.go:31] will retry after 3.753610745s: waiting for machine to come up
	I0719 04:22:50.944522  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:50.944989  145142 main.go:141] libmachine: (ha-925161) Found IP for machine: 192.168.39.246
	I0719 04:22:50.945009  145142 main.go:141] libmachine: (ha-925161) Reserving static IP address...
	I0719 04:22:50.945022  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has current primary IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:50.945472  145142 main.go:141] libmachine: (ha-925161) DBG | unable to find host DHCP lease matching {name: "ha-925161", mac: "52:54:00:15:c3:8c", ip: "192.168.39.246"} in network mk-ha-925161
	I0719 04:22:51.018725  145142 main.go:141] libmachine: (ha-925161) DBG | Getting to WaitForSSH function...
	I0719 04:22:51.018760  145142 main.go:141] libmachine: (ha-925161) Reserved static IP address: 192.168.39.246
	I0719 04:22:51.018774  145142 main.go:141] libmachine: (ha-925161) Waiting for SSH to be available...
	I0719 04:22:51.021353  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.021792  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:minikube Clientid:01:52:54:00:15:c3:8c}
	I0719 04:22:51.021821  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.021953  145142 main.go:141] libmachine: (ha-925161) DBG | Using SSH client type: external
	I0719 04:22:51.021980  145142 main.go:141] libmachine: (ha-925161) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa (-rw-------)
	I0719 04:22:51.022010  145142 main.go:141] libmachine: (ha-925161) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 04:22:51.022020  145142 main.go:141] libmachine: (ha-925161) DBG | About to run SSH command:
	I0719 04:22:51.022055  145142 main.go:141] libmachine: (ha-925161) DBG | exit 0
	I0719 04:22:51.145116  145142 main.go:141] libmachine: (ha-925161) DBG | SSH cmd err, output: <nil>: 
	I0719 04:22:51.145388  145142 main.go:141] libmachine: (ha-925161) KVM machine creation complete!
	I0719 04:22:51.145695  145142 main.go:141] libmachine: (ha-925161) Calling .GetConfigRaw
	I0719 04:22:51.146268  145142 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:22:51.146475  145142 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:22:51.146643  145142 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0719 04:22:51.146660  145142 main.go:141] libmachine: (ha-925161) Calling .GetState
	I0719 04:22:51.147937  145142 main.go:141] libmachine: Detecting operating system of created instance...
	I0719 04:22:51.147953  145142 main.go:141] libmachine: Waiting for SSH to be available...
	I0719 04:22:51.147958  145142 main.go:141] libmachine: Getting to WaitForSSH function...
	I0719 04:22:51.147964  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:22:51.150250  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.150613  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:22:51.150639  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.150801  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:22:51.151003  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:22:51.151219  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:22:51.151391  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:22:51.151591  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:22:51.151841  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0719 04:22:51.151854  145142 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0719 04:22:51.260174  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 04:22:51.260202  145142 main.go:141] libmachine: Detecting the provisioner...
	I0719 04:22:51.260213  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:22:51.262758  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.263152  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:22:51.263183  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.263360  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:22:51.263593  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:22:51.263774  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:22:51.263956  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:22:51.264129  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:22:51.264302  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0719 04:22:51.264312  145142 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0719 04:22:51.369301  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0719 04:22:51.369395  145142 main.go:141] libmachine: found compatible host: buildroot
	I0719 04:22:51.369402  145142 main.go:141] libmachine: Provisioning with buildroot...
	I0719 04:22:51.369411  145142 main.go:141] libmachine: (ha-925161) Calling .GetMachineName
	I0719 04:22:51.369650  145142 buildroot.go:166] provisioning hostname "ha-925161"
	I0719 04:22:51.369677  145142 main.go:141] libmachine: (ha-925161) Calling .GetMachineName
	I0719 04:22:51.369925  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:22:51.372464  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.372803  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:22:51.372829  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.373018  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:22:51.373199  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:22:51.373367  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:22:51.373513  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:22:51.373696  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:22:51.373904  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0719 04:22:51.373920  145142 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-925161 && echo "ha-925161" | sudo tee /etc/hostname
	I0719 04:22:51.494103  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-925161
	
	I0719 04:22:51.494128  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:22:51.496673  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.497038  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:22:51.497078  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.497294  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:22:51.497484  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:22:51.497638  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:22:51.497755  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:22:51.497886  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:22:51.498050  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0719 04:22:51.498066  145142 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-925161' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-925161/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-925161' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 04:22:51.613340  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
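The shell fragment above keeps /etc/hosts in step with the freshly set hostname: if no line already ends in "ha-925161", it either rewrites the existing 127.0.1.1 entry or appends one. A minimal, self-contained Go sketch of the same idempotent update, purely illustrative and not minikube's actual provisioner code:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the shell above: leave the hosts content alone if any
// line already ends in the hostname, otherwise rewrite the 127.0.1.1 line or append one.
func ensureHostsEntry(hosts, name string) string {
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
		return hosts
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "ha-925161"))
}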
	I0719 04:22:51.613369  145142 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-122995/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-122995/.minikube}
	I0719 04:22:51.613396  145142 buildroot.go:174] setting up certificates
	I0719 04:22:51.613410  145142 provision.go:84] configureAuth start
	I0719 04:22:51.613425  145142 main.go:141] libmachine: (ha-925161) Calling .GetMachineName
	I0719 04:22:51.613741  145142 main.go:141] libmachine: (ha-925161) Calling .GetIP
	I0719 04:22:51.616089  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.616425  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:22:51.616450  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.616644  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:22:51.618512  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.618790  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:22:51.618815  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.618929  145142 provision.go:143] copyHostCerts
	I0719 04:22:51.618970  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem
	I0719 04:22:51.619009  145142 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem, removing ...
	I0719 04:22:51.619021  145142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem
	I0719 04:22:51.619112  145142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem (1123 bytes)
	I0719 04:22:51.619208  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem
	I0719 04:22:51.619233  145142 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem, removing ...
	I0719 04:22:51.619243  145142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem
	I0719 04:22:51.619283  145142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem (1679 bytes)
	I0719 04:22:51.619389  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem
	I0719 04:22:51.619416  145142 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem, removing ...
	I0719 04:22:51.619426  145142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem
	I0719 04:22:51.619464  145142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem (1082 bytes)
	I0719 04:22:51.619532  145142 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem org=jenkins.ha-925161 san=[127.0.0.1 192.168.39.246 ha-925161 localhost minikube]
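The server certificate above is issued by the local minikube CA for org jenkins.ha-925161 with the SANs [127.0.0.1 192.168.39.246 ha-925161 localhost minikube]. As a hedged illustration of what that amounts to in Go's crypto/x509 (the throwaway CA below is a stand-in for the real ca.pem/ca-key.pem; only the org and SAN values are taken from the log line):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA as a stand-in for minikube's ca.pem / ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the org and SANs reported in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-925161"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.246")},
		DNSNames:     []string{"ha-925161", "localhost", "minikube"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Println("server cert DER bytes:", len(der), "err:", err)
}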
	I0719 04:22:51.663768  145142 provision.go:177] copyRemoteCerts
	I0719 04:22:51.663824  145142 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 04:22:51.663850  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:22:51.666543  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.666863  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:22:51.666893  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.667035  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:22:51.667218  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:22:51.667391  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:22:51.667555  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:22:51.752405  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 04:22:51.752484  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 04:22:51.774151  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 04:22:51.774216  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 04:22:51.795937  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 04:22:51.796002  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0719 04:22:51.817323  145142 provision.go:87] duration metric: took 203.899941ms to configureAuth
	I0719 04:22:51.817351  145142 buildroot.go:189] setting minikube options for container-runtime
	I0719 04:22:51.817524  145142 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:22:51.817604  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:22:51.820662  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.821038  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:22:51.821085  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:51.821218  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:22:51.821432  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:22:51.821578  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:22:51.821743  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:22:51.821904  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:22:51.822074  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0719 04:22:51.822092  145142 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 04:22:52.077205  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 04:22:52.077235  145142 main.go:141] libmachine: Checking connection to Docker...
	I0719 04:22:52.077245  145142 main.go:141] libmachine: (ha-925161) Calling .GetURL
	I0719 04:22:52.078520  145142 main.go:141] libmachine: (ha-925161) DBG | Using libvirt version 6000000
	I0719 04:22:52.080782  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:52.081163  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:22:52.081193  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:52.081359  145142 main.go:141] libmachine: Docker is up and running!
	I0719 04:22:52.081372  145142 main.go:141] libmachine: Reticulating splines...
	I0719 04:22:52.081380  145142 client.go:171] duration metric: took 22.213394389s to LocalClient.Create
	I0719 04:22:52.081404  145142 start.go:167] duration metric: took 22.213460023s to libmachine.API.Create "ha-925161"
	I0719 04:22:52.081414  145142 start.go:293] postStartSetup for "ha-925161" (driver="kvm2")
	I0719 04:22:52.081422  145142 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 04:22:52.081439  145142 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:22:52.081699  145142 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 04:22:52.081730  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:22:52.083655  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:52.083904  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:22:52.083924  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:52.084069  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:22:52.084243  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:22:52.084386  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:22:52.084516  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:22:52.167142  145142 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 04:22:52.170928  145142 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 04:22:52.170953  145142 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-122995/.minikube/addons for local assets ...
	I0719 04:22:52.171027  145142 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-122995/.minikube/files for local assets ...
	I0719 04:22:52.171144  145142 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem -> 1301702.pem in /etc/ssl/certs
	I0719 04:22:52.171159  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem -> /etc/ssl/certs/1301702.pem
	I0719 04:22:52.171273  145142 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 04:22:52.179990  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem --> /etc/ssl/certs/1301702.pem (1708 bytes)
	I0719 04:22:52.201307  145142 start.go:296] duration metric: took 119.879736ms for postStartSetup
	I0719 04:22:52.201359  145142 main.go:141] libmachine: (ha-925161) Calling .GetConfigRaw
	I0719 04:22:52.201989  145142 main.go:141] libmachine: (ha-925161) Calling .GetIP
	I0719 04:22:52.204369  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:52.204678  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:22:52.204699  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:52.204974  145142 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/config.json ...
	I0719 04:22:52.205234  145142 start.go:128] duration metric: took 22.355495768s to createHost
	I0719 04:22:52.205264  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:22:52.207464  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:52.207757  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:22:52.207779  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:52.207942  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:22:52.208138  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:22:52.208320  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:22:52.208447  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:22:52.208586  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:22:52.208764  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0719 04:22:52.208782  145142 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 04:22:52.317415  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721362972.292561103
	
	I0719 04:22:52.317439  145142 fix.go:216] guest clock: 1721362972.292561103
	I0719 04:22:52.317449  145142 fix.go:229] Guest: 2024-07-19 04:22:52.292561103 +0000 UTC Remote: 2024-07-19 04:22:52.205248354 +0000 UTC m=+22.458372431 (delta=87.312749ms)
	I0719 04:22:52.317509  145142 fix.go:200] guest clock delta is within tolerance: 87.312749ms
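The clock check above compares the guest's `date +%s.%N` output (1721362972.292561103) against the host-side timestamp taken when the command returned, and proceeds because the delta of 87.312749ms is within tolerance. A small sketch reproducing that comparison with the values from the log; the one-second threshold here is illustrative, the actual tolerance is not shown in this excerpt:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the guest's "date +%s.%N" output into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1721362972.292561103")
	if err != nil {
		panic(err)
	}
	remote := time.Date(2024, 7, 19, 4, 22, 52, 205248354, time.UTC)
	delta := guest.Sub(remote)
	tolerance := time.Second // illustrative threshold, not necessarily minikube's
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < tolerance && delta > -tolerance)
}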
	I0719 04:22:52.317520  145142 start.go:83] releasing machines lock for "ha-925161", held for 22.46789615s
	I0719 04:22:52.317550  145142 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:22:52.317844  145142 main.go:141] libmachine: (ha-925161) Calling .GetIP
	I0719 04:22:52.320096  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:52.320481  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:22:52.320494  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:52.320651  145142 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:22:52.321136  145142 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:22:52.321303  145142 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:22:52.321397  145142 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 04:22:52.321441  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:22:52.321569  145142 ssh_runner.go:195] Run: cat /version.json
	I0719 04:22:52.321593  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:22:52.323949  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:52.324156  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:52.324388  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:22:52.324415  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:52.324508  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:22:52.324679  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:22:52.324665  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:22:52.324744  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:52.324790  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:22:52.324883  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:22:52.324946  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:22:52.325030  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:22:52.325113  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:22:52.325235  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:22:52.401397  145142 ssh_runner.go:195] Run: systemctl --version
	I0719 04:22:52.437235  145142 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 04:22:52.594635  145142 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 04:22:52.600375  145142 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 04:22:52.600438  145142 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 04:22:52.614783  145142 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 04:22:52.614809  145142 start.go:495] detecting cgroup driver to use...
	I0719 04:22:52.614879  145142 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 04:22:52.630236  145142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 04:22:52.642797  145142 docker.go:217] disabling cri-docker service (if available) ...
	I0719 04:22:52.642858  145142 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 04:22:52.654858  145142 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 04:22:52.666830  145142 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 04:22:52.781082  145142 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 04:22:52.941832  145142 docker.go:233] disabling docker service ...
	I0719 04:22:52.941908  145142 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 04:22:52.955307  145142 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 04:22:52.967554  145142 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 04:22:53.089302  145142 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 04:22:53.210780  145142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 04:22:53.223427  145142 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 04:22:53.240098  145142 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 04:22:53.240168  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:22:53.249710  145142 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 04:22:53.249794  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:22:53.259149  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:22:53.268593  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:22:53.277902  145142 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 04:22:53.287610  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:22:53.296814  145142 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:22:53.312257  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:22:53.321893  145142 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 04:22:53.330295  145142 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 04:22:53.330338  145142 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 04:22:53.341563  145142 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 04:22:53.350032  145142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:22:53.467060  145142 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 04:22:53.594661  145142 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 04:22:53.594734  145142 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 04:22:53.598831  145142 start.go:563] Will wait 60s for crictl version
	I0719 04:22:53.598882  145142 ssh_runner.go:195] Run: which crictl
	I0719 04:22:53.602229  145142 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 04:22:53.635996  145142 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 04:22:53.636094  145142 ssh_runner.go:195] Run: crio --version
	I0719 04:22:53.661656  145142 ssh_runner.go:195] Run: crio --version
	I0719 04:22:53.689824  145142 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 04:22:53.691225  145142 main.go:141] libmachine: (ha-925161) Calling .GetIP
	I0719 04:22:53.694282  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:53.694729  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:22:53.694748  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:22:53.694969  145142 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 04:22:53.698733  145142 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 04:22:53.711205  145142 kubeadm.go:883] updating cluster {Name:ha-925161 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-925161 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 04:22:53.711432  145142 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 04:22:53.711526  145142 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 04:22:53.743183  145142 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0719 04:22:53.743251  145142 ssh_runner.go:195] Run: which lz4
	I0719 04:22:53.746798  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0719 04:22:53.746880  145142 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 04:22:53.750604  145142 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 04:22:53.750637  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0719 04:22:54.968122  145142 crio.go:462] duration metric: took 1.221260849s to copy over tarball
	I0719 04:22:54.968190  145142 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 04:22:57.078373  145142 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.110157022s)
	I0719 04:22:57.078410  145142 crio.go:469] duration metric: took 2.11026113s to extract the tarball
	I0719 04:22:57.078418  145142 ssh_runner.go:146] rm: /preloaded.tar.lz4
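For scale: the preload tarball is 406,200,976 bytes, so the durations above work out to roughly 317 MiB/s for the copy into the VM (1.22s) and about 184 MiB/s for the tar + lz4 extraction (2.11s). A one-liner reproducing the arithmetic from the numbers in the log:

package main

import "fmt"

func main() {
	const size = 406200976.0 // bytes in preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	const mib = 1024 * 1024
	fmt.Printf("copy:    %.0f MiB/s\n", size/1.221260849/mib)
	fmt.Printf("extract: %.0f MiB/s\n", size/2.110157022/mib)
}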
	I0719 04:22:57.116161  145142 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 04:22:57.164739  145142 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 04:22:57.164768  145142 cache_images.go:84] Images are preloaded, skipping loading
	I0719 04:22:57.164778  145142 kubeadm.go:934] updating node { 192.168.39.246 8443 v1.30.3 crio true true} ...
	I0719 04:22:57.164964  145142 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-925161 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-925161 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 04:22:57.165049  145142 ssh_runner.go:195] Run: crio config
	I0719 04:22:57.211378  145142 cni.go:84] Creating CNI manager for ""
	I0719 04:22:57.211395  145142 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0719 04:22:57.211404  145142 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 04:22:57.211424  145142 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.246 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-925161 NodeName:ha-925161 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 04:22:57.211551  145142 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-925161"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.246
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
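The generated kubeadm.yaml above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A dependency-free sketch that splits such a stream and reports each document's kind, which can help when eyeballing what was written to /var/tmp/minikube/kubeadm.yaml.new; this is illustrative code, not minikube's:

package main

import (
	"fmt"
	"strings"
)

// kinds returns the "kind:" value of each document in a multi-document YAML stream.
func kinds(stream string) []string {
	var out []string
	for _, doc := range strings.Split(stream, "\n---") {
		for _, line := range strings.Split(doc, "\n") {
			t := strings.TrimSpace(line)
			if strings.HasPrefix(t, "kind:") {
				out = append(out, strings.TrimSpace(strings.TrimPrefix(t, "kind:")))
				break
			}
		}
	}
	return out
}

func main() {
	stream := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\n"
	fmt.Println(kinds(stream)) // [InitConfiguration ClusterConfiguration]
}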
	
	I0719 04:22:57.211579  145142 kube-vip.go:115] generating kube-vip config ...
	I0719 04:22:57.211621  145142 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0719 04:22:57.230247  145142 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0719 04:22:57.230345  145142 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
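The kube-vip static pod above enables ARP-based VIP failover and control-plane load balancing on 192.168.39.254:8443, with leader election tuned to vip_leaseduration=5, vip_renewdeadline=3 and vip_retryperiod=1 (seconds). Those values follow the usual leader-election ordering of leaseDuration > renewDeadline > retryPeriod; a trivial check of that rule of thumb (not kube-vip's own validation):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values from the kube-vip manifest above.
	lease := 5 * time.Second
	renew := 3 * time.Second
	retry := 1 * time.Second
	ok := lease > renew && renew > retry
	fmt.Printf("leaseDuration=%v renewDeadline=%v retryPeriod=%v valid ordering: %v\n", lease, renew, retry, ok)
}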
	I0719 04:22:57.230399  145142 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 04:22:57.243255  145142 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 04:22:57.243312  145142 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0719 04:22:57.257333  145142 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0719 04:22:57.272554  145142 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 04:22:57.287789  145142 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0719 04:22:57.303104  145142 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0719 04:22:57.318165  145142 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0719 04:22:57.321758  145142 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 04:22:57.332926  145142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:22:57.442766  145142 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 04:22:57.458401  145142 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161 for IP: 192.168.39.246
	I0719 04:22:57.458424  145142 certs.go:194] generating shared ca certs ...
	I0719 04:22:57.458440  145142 certs.go:226] acquiring lock for ca certs: {Name:mk4073377b5f511f5cfaf63e5b0f12377e731a57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:22:57.458619  145142 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key
	I0719 04:22:57.458672  145142 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key
	I0719 04:22:57.458685  145142 certs.go:256] generating profile certs ...
	I0719 04:22:57.458746  145142 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/client.key
	I0719 04:22:57.458764  145142 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/client.crt with IP's: []
	I0719 04:22:57.614806  145142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/client.crt ...
	I0719 04:22:57.614835  145142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/client.crt: {Name:mk2b285240478b195a743d5dbbf2e8b1205963d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:22:57.614999  145142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/client.key ...
	I0719 04:22:57.615042  145142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/client.key: {Name:mk5af0dd55a6ddee32443cac6901c4084cc1af27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:22:57.615123  145142 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key.eb4c9cee
	I0719 04:22:57.615138  145142 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt.eb4c9cee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246 192.168.39.254]
	I0719 04:22:57.792532  145142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt.eb4c9cee ...
	I0719 04:22:57.792565  145142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt.eb4c9cee: {Name:mkeae466d3f989c23944a81afdc9c59192b64e87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:22:57.792733  145142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key.eb4c9cee ...
	I0719 04:22:57.792744  145142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key.eb4c9cee: {Name:mkffd606373cfbf144032e67b52d14d744d79f1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:22:57.792811  145142 certs.go:381] copying /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt.eb4c9cee -> /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt
	I0719 04:22:57.792880  145142 certs.go:385] copying /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key.eb4c9cee -> /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key
	I0719 04:22:57.792930  145142 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.key
	I0719 04:22:57.792944  145142 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.crt with IP's: []
	I0719 04:22:57.863362  145142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.crt ...
	I0719 04:22:57.863396  145142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.crt: {Name:mk5a457234641ef9d141c282246d2d8c5a6a8587 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:22:57.863564  145142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.key ...
	I0719 04:22:57.863574  145142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.key: {Name:mk6de56d1e9e4ace980d9a078dcedb69f0c01037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:22:57.863648  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 04:22:57.863664  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0719 04:22:57.863677  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 04:22:57.863689  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 04:22:57.863698  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 04:22:57.863711  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 04:22:57.863722  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 04:22:57.863733  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 04:22:57.863776  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem (1338 bytes)
	W0719 04:22:57.863808  145142 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170_empty.pem, impossibly tiny 0 bytes
	I0719 04:22:57.863819  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 04:22:57.863842  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem (1082 bytes)
	I0719 04:22:57.863862  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem (1123 bytes)
	I0719 04:22:57.863882  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem (1679 bytes)
	I0719 04:22:57.863917  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem (1708 bytes)
	I0719 04:22:57.863945  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem -> /usr/share/ca-certificates/130170.pem
	I0719 04:22:57.863957  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem -> /usr/share/ca-certificates/1301702.pem
	I0719 04:22:57.863969  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:22:57.864478  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 04:22:57.889058  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 04:22:57.910801  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 04:22:57.933050  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 04:22:57.954707  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 04:22:57.976926  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 04:22:57.998696  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 04:22:58.020435  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 04:22:58.041656  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem --> /usr/share/ca-certificates/130170.pem (1338 bytes)
	I0719 04:22:58.062439  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem --> /usr/share/ca-certificates/1301702.pem (1708 bytes)
	I0719 04:22:58.083180  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 04:22:58.103892  145142 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 04:22:58.118997  145142 ssh_runner.go:195] Run: openssl version
	I0719 04:22:58.124281  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1301702.pem && ln -fs /usr/share/ca-certificates/1301702.pem /etc/ssl/certs/1301702.pem"
	I0719 04:22:58.133860  145142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1301702.pem
	I0719 04:22:58.137892  145142 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 04:19 /usr/share/ca-certificates/1301702.pem
	I0719 04:22:58.137938  145142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1301702.pem
	I0719 04:22:58.143251  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1301702.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 04:22:58.153127  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 04:22:58.162724  145142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:22:58.166937  145142 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:38 /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:22:58.166995  145142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:22:58.172114  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 04:22:58.181639  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130170.pem && ln -fs /usr/share/ca-certificates/130170.pem /etc/ssl/certs/130170.pem"
	I0719 04:22:58.191344  145142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130170.pem
	I0719 04:22:58.195293  145142 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 04:19 /usr/share/ca-certificates/130170.pem
	I0719 04:22:58.195344  145142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130170.pem
	I0719 04:22:58.200358  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/130170.pem /etc/ssl/certs/51391683.0"
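The three openssl/ln pairs above build the classic ca-certificates layout: each CA file copied to /usr/share/ca-certificates gets a symlink named after its OpenSSL subject hash (for example 51391683.0) in /etc/ssl/certs, so the runtime can find it by hash. A sketch of the same dance from Go, shelling out to the exact openssl invocation shown in the log; paths are illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert creates <certsDir>/<subject-hash>.0 pointing at certPath,
// the same layout the provisioner builds with "openssl x509 -hash" + "ln -fs".
func linkCACert(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // emulate ln -f: replace any existing link
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	fmt.Println(link, err)
}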
	I0719 04:22:58.210174  145142 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 04:22:58.213728  145142 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 04:22:58.213784  145142 kubeadm.go:392] StartCluster: {Name:ha-925161 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-925161 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:22:58.213872  145142 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 04:22:58.213932  145142 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 04:22:58.261349  145142 cri.go:89] found id: ""
	I0719 04:22:58.261440  145142 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 04:22:58.272291  145142 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 04:22:58.284767  145142 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 04:22:58.295705  145142 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 04:22:58.295727  145142 kubeadm.go:157] found existing configuration files:
	
	I0719 04:22:58.295780  145142 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 04:22:58.304678  145142 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 04:22:58.304737  145142 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 04:22:58.313231  145142 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 04:22:58.321223  145142 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 04:22:58.321284  145142 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 04:22:58.329529  145142 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 04:22:58.337801  145142 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 04:22:58.337853  145142 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 04:22:58.346107  145142 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 04:22:58.353960  145142 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 04:22:58.354010  145142 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
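The four grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, otherwise it is removed so kubeadm can regenerate it. A minimal local sketch of the same check (illustrative only; it reads the files on the current host instead of over SSH as minikube does):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // Remove any kubeconfig that does not reference the expected control-plane
    // endpoint, mirroring the grep/rm sequence in the log above.
    func cleanStaleKubeconfigs(endpoint string, files []string) {
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil {
    			continue // missing file: nothing to clean up
    		}
    		if !strings.Contains(string(data), endpoint) {
    			fmt.Printf("removing stale config %s\n", f)
    			_ = os.Remove(f)
    		}
    	}
    }

    func main() {
    	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }
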
	I0719 04:22:58.362072  145142 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 04:22:58.455163  145142 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0719 04:22:58.455292  145142 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 04:22:58.562041  145142 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 04:22:58.562203  145142 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 04:22:58.562316  145142 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 04:22:58.740762  145142 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 04:22:58.743009  145142 out.go:204]   - Generating certificates and keys ...
	I0719 04:22:58.743131  145142 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 04:22:58.743200  145142 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 04:22:59.292694  145142 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 04:22:59.399545  145142 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0719 04:22:59.579278  145142 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0719 04:22:59.901922  145142 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0719 04:22:59.986580  145142 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0719 04:22:59.986694  145142 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-925161 localhost] and IPs [192.168.39.246 127.0.0.1 ::1]
	I0719 04:23:00.211063  145142 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0719 04:23:00.211259  145142 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-925161 localhost] and IPs [192.168.39.246 127.0.0.1 ::1]
	I0719 04:23:00.315632  145142 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 04:23:00.456874  145142 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 04:23:00.661314  145142 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0719 04:23:00.661380  145142 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 04:23:00.827429  145142 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 04:23:01.009407  145142 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 04:23:01.113224  145142 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 04:23:01.329786  145142 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 04:23:01.627231  145142 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 04:23:01.627729  145142 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 04:23:01.630104  145142 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 04:23:01.771182  145142 out.go:204]   - Booting up control plane ...
	I0719 04:23:01.771326  145142 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 04:23:01.771440  145142 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 04:23:01.771532  145142 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 04:23:01.771671  145142 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 04:23:01.771808  145142 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 04:23:01.771858  145142 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 04:23:01.797635  145142 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 04:23:01.797718  145142 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 04:23:02.300388  145142 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 504.569979ms
	I0719 04:23:02.300511  145142 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 04:23:08.370659  145142 kubeadm.go:310] [api-check] The API server is healthy after 6.07413465s
	I0719 04:23:08.382803  145142 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 04:23:08.397124  145142 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 04:23:08.438896  145142 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 04:23:08.439110  145142 kubeadm.go:310] [mark-control-plane] Marking the node ha-925161 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 04:23:08.455201  145142 kubeadm.go:310] [bootstrap-token] Using token: ncc8dk.18bi28qrzcrx8rop
	I0719 04:23:08.456786  145142 out.go:204]   - Configuring RBAC rules ...
	I0719 04:23:08.456935  145142 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 04:23:08.462217  145142 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 04:23:08.473602  145142 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 04:23:08.477175  145142 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 04:23:08.480146  145142 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 04:23:08.483162  145142 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 04:23:08.775785  145142 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 04:23:09.232911  145142 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 04:23:09.776508  145142 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 04:23:09.776533  145142 kubeadm.go:310] 
	I0719 04:23:09.776655  145142 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 04:23:09.776697  145142 kubeadm.go:310] 
	I0719 04:23:09.776800  145142 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 04:23:09.776812  145142 kubeadm.go:310] 
	I0719 04:23:09.776856  145142 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 04:23:09.776932  145142 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 04:23:09.777012  145142 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 04:23:09.777020  145142 kubeadm.go:310] 
	I0719 04:23:09.777104  145142 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 04:23:09.777117  145142 kubeadm.go:310] 
	I0719 04:23:09.777180  145142 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 04:23:09.777191  145142 kubeadm.go:310] 
	I0719 04:23:09.777233  145142 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 04:23:09.777342  145142 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 04:23:09.777427  145142 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 04:23:09.777434  145142 kubeadm.go:310] 
	I0719 04:23:09.777535  145142 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 04:23:09.777637  145142 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 04:23:09.777645  145142 kubeadm.go:310] 
	I0719 04:23:09.777755  145142 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ncc8dk.18bi28qrzcrx8rop \
	I0719 04:23:09.777886  145142 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1b8c9b438cd382daae07d0c80077e3e844c6e3a56a419c26c4cfa86e5846b833 \
	I0719 04:23:09.777909  145142 kubeadm.go:310] 	--control-plane 
	I0719 04:23:09.777929  145142 kubeadm.go:310] 
	I0719 04:23:09.778038  145142 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 04:23:09.778050  145142 kubeadm.go:310] 
	I0719 04:23:09.778157  145142 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ncc8dk.18bi28qrzcrx8rop \
	I0719 04:23:09.778314  145142 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1b8c9b438cd382daae07d0c80077e3e844c6e3a56a419c26c4cfa86e5846b833 
	I0719 04:23:09.778467  145142 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 04:23:09.778484  145142 cni.go:84] Creating CNI manager for ""
	I0719 04:23:09.778493  145142 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0719 04:23:09.780280  145142 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0719 04:23:09.781555  145142 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0719 04:23:09.786731  145142 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0719 04:23:09.786755  145142 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0719 04:23:09.804703  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
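The two steps above copy the generated kindnet manifest (2438 bytes) to /var/tmp/minikube/cni.yaml on the guest and apply it with the versioned kubectl binary. A simplified local sketch of the apply step, assuming kubectl is on PATH and the manifest bytes are already in hand (the manifest content below is only a placeholder):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // Write a CNI manifest to disk and apply it with kubectl, as in the log above.
    func applyCNIManifest(manifest []byte, kubeconfig string) error {
    	path := "/var/tmp/minikube/cni.yaml" // same target path as the log
    	if err := os.MkdirAll("/var/tmp/minikube", 0o755); err != nil {
    		return err
    	}
    	if err := os.WriteFile(path, manifest, 0o644); err != nil {
    		return err
    	}
    	cmd := exec.Command("kubectl", "apply", "--kubeconfig", kubeconfig, "-f", path)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	manifest := []byte("# placeholder: the kindnet DaemonSet manifest would go here\n")
    	if err := applyCNIManifest(manifest, "/var/lib/minikube/kubeconfig"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
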
	I0719 04:23:10.155467  145142 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 04:23:10.155538  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:10.155538  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-925161 minikube.k8s.io/updated_at=2024_07_19T04_23_10_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db minikube.k8s.io/name=ha-925161 minikube.k8s.io/primary=true
	I0719 04:23:10.183982  145142 ops.go:34] apiserver oom_adj: -16
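The oom_adj check above reads /proc/$(pgrep kube-apiserver)/oom_adj and records -16, i.e. the kubelet has shielded the API server from the OOM killer. A rough standalone equivalent, assuming pgrep is available and a kube-apiserver process is running on the host:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Find the kube-apiserver PID, then read its OOM adjustment score.
    	out, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
    		os.Exit(1)
    	}
    	pid := strings.Fields(string(out))[0]
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
    }
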
	I0719 04:23:10.354607  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:10.855068  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:11.354675  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:11.854872  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:12.355551  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:12.855569  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:13.354645  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:13.855579  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:14.355472  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:14.854888  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:15.355575  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:15.854963  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:16.355321  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:16.854770  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:17.354631  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:17.854931  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:18.354727  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:18.855480  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:19.354963  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:19.855677  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:20.355558  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:20.854930  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:21.354922  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:23:21.472004  145142 kubeadm.go:1113] duration metric: took 11.316526469s to wait for elevateKubeSystemPrivileges
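The repeated `kubectl get sa default` runs between 04:23:10 and 04:23:21 are a ~500ms poll: minikube keeps asking for the "default" service account until the controller-manager has created it, which is what the 11.3s elevateKubeSystemPrivileges metric measures. A simplified poll loop with the same shape, assuming kubectl is on PATH and the kubeconfig path from the log is valid:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    // Poll until the default service account exists or the timeout expires.
    func waitForDefaultSA(kubeconfig string, interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
    		if err := cmd.Run(); err == nil {
    			return nil // service account is present
    		}
    		time.Sleep(interval)
    	}
    	return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
    	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 500*time.Millisecond, 2*time.Minute); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("default service account is ready")
    }
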
	I0719 04:23:21.472045  145142 kubeadm.go:394] duration metric: took 23.258265944s to StartCluster
	I0719 04:23:21.472065  145142 settings.go:142] acquiring lock: {Name:mka29304fbead54bd9b698f9018edea7e59177cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:23:21.472152  145142 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19302-122995/kubeconfig
	I0719 04:23:21.472844  145142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/kubeconfig: {Name:mk6e4a1b81f147a5c312ddde5acb372811581248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:23:21.473103  145142 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 04:23:21.473130  145142 start.go:241] waiting for startup goroutines ...
	I0719 04:23:21.473113  145142 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0719 04:23:21.473171  145142 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 04:23:21.473266  145142 addons.go:69] Setting storage-provisioner=true in profile "ha-925161"
	I0719 04:23:21.473270  145142 addons.go:69] Setting default-storageclass=true in profile "ha-925161"
	I0719 04:23:21.473308  145142 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-925161"
	I0719 04:23:21.473325  145142 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:23:21.473311  145142 addons.go:234] Setting addon storage-provisioner=true in "ha-925161"
	I0719 04:23:21.473420  145142 host.go:66] Checking if "ha-925161" exists ...
	I0719 04:23:21.473697  145142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:23:21.473726  145142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:23:21.473753  145142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:23:21.473786  145142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:23:21.488830  145142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46161
	I0719 04:23:21.489125  145142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39553
	I0719 04:23:21.489308  145142 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:23:21.489565  145142 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:23:21.489857  145142 main.go:141] libmachine: Using API Version  1
	I0719 04:23:21.489883  145142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:23:21.490015  145142 main.go:141] libmachine: Using API Version  1
	I0719 04:23:21.490039  145142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:23:21.490205  145142 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:23:21.490326  145142 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:23:21.490507  145142 main.go:141] libmachine: (ha-925161) Calling .GetState
	I0719 04:23:21.490737  145142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:23:21.490784  145142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:23:21.492853  145142 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19302-122995/kubeconfig
	I0719 04:23:21.493219  145142 kapi.go:59] client config for ha-925161: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/client.crt", KeyFile:"/home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/client.key", CAFile:"/home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 04:23:21.493756  145142 cert_rotation.go:137] Starting client certificate rotation controller
	I0719 04:23:21.494012  145142 addons.go:234] Setting addon default-storageclass=true in "ha-925161"
	I0719 04:23:21.494061  145142 host.go:66] Checking if "ha-925161" exists ...
	I0719 04:23:21.494430  145142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:23:21.494476  145142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:23:21.505441  145142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38507
	I0719 04:23:21.505878  145142 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:23:21.506323  145142 main.go:141] libmachine: Using API Version  1
	I0719 04:23:21.506346  145142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:23:21.506672  145142 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:23:21.506878  145142 main.go:141] libmachine: (ha-925161) Calling .GetState
	I0719 04:23:21.508773  145142 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:23:21.508830  145142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46873
	I0719 04:23:21.509244  145142 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:23:21.509764  145142 main.go:141] libmachine: Using API Version  1
	I0719 04:23:21.509788  145142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:23:21.510171  145142 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:23:21.510850  145142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:23:21.510896  145142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:23:21.511002  145142 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 04:23:21.512574  145142 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 04:23:21.512594  145142 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 04:23:21.512614  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:23:21.515483  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:23:21.515925  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:23:21.515951  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:23:21.516105  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:23:21.516288  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:23:21.516454  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:23:21.516577  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:23:21.526157  145142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39953
	I0719 04:23:21.526628  145142 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:23:21.527126  145142 main.go:141] libmachine: Using API Version  1
	I0719 04:23:21.527150  145142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:23:21.527494  145142 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:23:21.527819  145142 main.go:141] libmachine: (ha-925161) Calling .GetState
	I0719 04:23:21.529402  145142 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:23:21.529638  145142 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 04:23:21.529652  145142 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 04:23:21.529670  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:23:21.532635  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:23:21.533021  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:23:21.533087  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:23:21.533331  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:23:21.533531  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:23:21.533681  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:23:21.533846  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:23:21.580823  145142 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0719 04:23:21.666715  145142 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 04:23:21.678076  145142 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 04:23:22.028194  145142 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
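The pipeline at 04:23:21.580823 edits the CoreDNS Corefile in flight: it inserts a hosts block mapping host.minikube.internal to 192.168.39.1 immediately before the `forward . /etc/resolv.conf` line, then replaces the ConfigMap. A string-level sketch of that edit, assuming the Corefile text has already been fetched (the sample Corefile in main is a trimmed stand-in, not the real one):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // Insert a hosts block for host.minikube.internal ahead of the forward plugin,
    // mirroring the sed expression used in the log above.
    func injectHostRecord(corefile, hostIP string) string {
    	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
    	var out strings.Builder
    	for _, line := range strings.SplitAfter(corefile, "\n") {
    		if strings.Contains(line, "forward . /etc/resolv.conf") {
    			out.WriteString(hostsBlock)
    		}
    		out.WriteString(line)
    	}
    	return out.String()
    }

    func main() {
    	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
    	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
    }
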
	I0719 04:23:22.035621  145142 main.go:141] libmachine: Making call to close driver server
	I0719 04:23:22.035648  145142 main.go:141] libmachine: (ha-925161) Calling .Close
	I0719 04:23:22.036061  145142 main.go:141] libmachine: Successfully made call to close driver server
	I0719 04:23:22.036080  145142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 04:23:22.036087  145142 main.go:141] libmachine: Making call to close driver server
	I0719 04:23:22.036095  145142 main.go:141] libmachine: (ha-925161) Calling .Close
	I0719 04:23:22.036366  145142 main.go:141] libmachine: (ha-925161) DBG | Closing plugin on server side
	I0719 04:23:22.036368  145142 main.go:141] libmachine: Successfully made call to close driver server
	I0719 04:23:22.036392  145142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 04:23:22.036530  145142 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0719 04:23:22.036541  145142 round_trippers.go:469] Request Headers:
	I0719 04:23:22.036550  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:23:22.036555  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:23:22.045742  145142 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0719 04:23:22.046518  145142 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0719 04:23:22.046542  145142 round_trippers.go:469] Request Headers:
	I0719 04:23:22.046552  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:23:22.046557  145142 round_trippers.go:473]     Content-Type: application/json
	I0719 04:23:22.046562  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:23:22.054881  145142 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0719 04:23:22.055065  145142 main.go:141] libmachine: Making call to close driver server
	I0719 04:23:22.055083  145142 main.go:141] libmachine: (ha-925161) Calling .Close
	I0719 04:23:22.055366  145142 main.go:141] libmachine: Successfully made call to close driver server
	I0719 04:23:22.055384  145142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 04:23:22.259372  145142 main.go:141] libmachine: Making call to close driver server
	I0719 04:23:22.259392  145142 main.go:141] libmachine: (ha-925161) Calling .Close
	I0719 04:23:22.259666  145142 main.go:141] libmachine: Successfully made call to close driver server
	I0719 04:23:22.259683  145142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 04:23:22.259690  145142 main.go:141] libmachine: Making call to close driver server
	I0719 04:23:22.259697  145142 main.go:141] libmachine: (ha-925161) Calling .Close
	I0719 04:23:22.259953  145142 main.go:141] libmachine: Successfully made call to close driver server
	I0719 04:23:22.259997  145142 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 04:23:22.259972  145142 main.go:141] libmachine: (ha-925161) DBG | Closing plugin on server side
	I0719 04:23:22.261635  145142 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0719 04:23:22.262889  145142 addons.go:510] duration metric: took 789.717661ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0719 04:23:22.262926  145142 start.go:246] waiting for cluster config update ...
	I0719 04:23:22.262938  145142 start.go:255] writing updated cluster config ...
	I0719 04:23:22.264365  145142 out.go:177] 
	I0719 04:23:22.265565  145142 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:23:22.265634  145142 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/config.json ...
	I0719 04:23:22.267313  145142 out.go:177] * Starting "ha-925161-m02" control-plane node in "ha-925161" cluster
	I0719 04:23:22.268379  145142 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 04:23:22.268424  145142 cache.go:56] Caching tarball of preloaded images
	I0719 04:23:22.268525  145142 preload.go:172] Found /home/jenkins/minikube-integration/19302-122995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 04:23:22.268538  145142 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 04:23:22.268627  145142 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/config.json ...
	I0719 04:23:22.268844  145142 start.go:360] acquireMachinesLock for ha-925161-m02: {Name:mkfbbe6ca8c44534b944b48224a0199ec825bc72 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 04:23:22.268907  145142 start.go:364] duration metric: took 36.053µs to acquireMachinesLock for "ha-925161-m02"
	I0719 04:23:22.268928  145142 start.go:93] Provisioning new machine with config: &{Name:ha-925161 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:ha-925161 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 04:23:22.269013  145142 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0719 04:23:22.270308  145142 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 04:23:22.270405  145142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:23:22.270435  145142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:23:22.285250  145142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34229
	I0719 04:23:22.285656  145142 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:23:22.286103  145142 main.go:141] libmachine: Using API Version  1
	I0719 04:23:22.286120  145142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:23:22.286450  145142 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:23:22.286707  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetMachineName
	I0719 04:23:22.286870  145142 main.go:141] libmachine: (ha-925161-m02) Calling .DriverName
	I0719 04:23:22.287064  145142 start.go:159] libmachine.API.Create for "ha-925161" (driver="kvm2")
	I0719 04:23:22.287091  145142 client.go:168] LocalClient.Create starting
	I0719 04:23:22.287126  145142 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem
	I0719 04:23:22.287168  145142 main.go:141] libmachine: Decoding PEM data...
	I0719 04:23:22.287189  145142 main.go:141] libmachine: Parsing certificate...
	I0719 04:23:22.287260  145142 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem
	I0719 04:23:22.287286  145142 main.go:141] libmachine: Decoding PEM data...
	I0719 04:23:22.287301  145142 main.go:141] libmachine: Parsing certificate...
	I0719 04:23:22.287356  145142 main.go:141] libmachine: Running pre-create checks...
	I0719 04:23:22.287371  145142 main.go:141] libmachine: (ha-925161-m02) Calling .PreCreateCheck
	I0719 04:23:22.287545  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetConfigRaw
	I0719 04:23:22.287972  145142 main.go:141] libmachine: Creating machine...
	I0719 04:23:22.287988  145142 main.go:141] libmachine: (ha-925161-m02) Calling .Create
	I0719 04:23:22.288130  145142 main.go:141] libmachine: (ha-925161-m02) Creating KVM machine...
	I0719 04:23:22.289431  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found existing default KVM network
	I0719 04:23:22.289566  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found existing private KVM network mk-ha-925161
	I0719 04:23:22.289676  145142 main.go:141] libmachine: (ha-925161-m02) Setting up store path in /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02 ...
	I0719 04:23:22.289699  145142 main.go:141] libmachine: (ha-925161-m02) Building disk image from file:///home/jenkins/minikube-integration/19302-122995/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 04:23:22.289752  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:22.289667  145551 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19302-122995/.minikube
	I0719 04:23:22.289863  145142 main.go:141] libmachine: (ha-925161-m02) Downloading /home/jenkins/minikube-integration/19302-122995/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19302-122995/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 04:23:22.524417  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:22.524279  145551 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02/id_rsa...
	I0719 04:23:22.566631  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:22.566532  145551 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02/ha-925161-m02.rawdisk...
	I0719 04:23:22.566663  145142 main.go:141] libmachine: (ha-925161-m02) DBG | Writing magic tar header
	I0719 04:23:22.566709  145142 main.go:141] libmachine: (ha-925161-m02) DBG | Writing SSH key tar header
	I0719 04:23:22.566735  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:22.566643  145551 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02 ...
	I0719 04:23:22.566752  145142 main.go:141] libmachine: (ha-925161-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02
	I0719 04:23:22.566761  145142 main.go:141] libmachine: (ha-925161-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995/.minikube/machines
	I0719 04:23:22.566770  145142 main.go:141] libmachine: (ha-925161-m02) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02 (perms=drwx------)
	I0719 04:23:22.566779  145142 main.go:141] libmachine: (ha-925161-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995/.minikube
	I0719 04:23:22.566788  145142 main.go:141] libmachine: (ha-925161-m02) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995/.minikube/machines (perms=drwxr-xr-x)
	I0719 04:23:22.566805  145142 main.go:141] libmachine: (ha-925161-m02) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995/.minikube (perms=drwxr-xr-x)
	I0719 04:23:22.566819  145142 main.go:141] libmachine: (ha-925161-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995
	I0719 04:23:22.566833  145142 main.go:141] libmachine: (ha-925161-m02) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995 (perms=drwxrwxr-x)
	I0719 04:23:22.566845  145142 main.go:141] libmachine: (ha-925161-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0719 04:23:22.566854  145142 main.go:141] libmachine: (ha-925161-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0719 04:23:22.566860  145142 main.go:141] libmachine: (ha-925161-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0719 04:23:22.566867  145142 main.go:141] libmachine: (ha-925161-m02) DBG | Checking permissions on dir: /home/jenkins
	I0719 04:23:22.566873  145142 main.go:141] libmachine: (ha-925161-m02) DBG | Checking permissions on dir: /home
	I0719 04:23:22.566881  145142 main.go:141] libmachine: (ha-925161-m02) DBG | Skipping /home - not owner
	I0719 04:23:22.566911  145142 main.go:141] libmachine: (ha-925161-m02) Creating domain...
	I0719 04:23:22.567739  145142 main.go:141] libmachine: (ha-925161-m02) define libvirt domain using xml: 
	I0719 04:23:22.567749  145142 main.go:141] libmachine: (ha-925161-m02) <domain type='kvm'>
	I0719 04:23:22.567759  145142 main.go:141] libmachine: (ha-925161-m02)   <name>ha-925161-m02</name>
	I0719 04:23:22.567764  145142 main.go:141] libmachine: (ha-925161-m02)   <memory unit='MiB'>2200</memory>
	I0719 04:23:22.567769  145142 main.go:141] libmachine: (ha-925161-m02)   <vcpu>2</vcpu>
	I0719 04:23:22.567779  145142 main.go:141] libmachine: (ha-925161-m02)   <features>
	I0719 04:23:22.567786  145142 main.go:141] libmachine: (ha-925161-m02)     <acpi/>
	I0719 04:23:22.567793  145142 main.go:141] libmachine: (ha-925161-m02)     <apic/>
	I0719 04:23:22.567800  145142 main.go:141] libmachine: (ha-925161-m02)     <pae/>
	I0719 04:23:22.567806  145142 main.go:141] libmachine: (ha-925161-m02)     
	I0719 04:23:22.567814  145142 main.go:141] libmachine: (ha-925161-m02)   </features>
	I0719 04:23:22.567822  145142 main.go:141] libmachine: (ha-925161-m02)   <cpu mode='host-passthrough'>
	I0719 04:23:22.567831  145142 main.go:141] libmachine: (ha-925161-m02)   
	I0719 04:23:22.567836  145142 main.go:141] libmachine: (ha-925161-m02)   </cpu>
	I0719 04:23:22.567841  145142 main.go:141] libmachine: (ha-925161-m02)   <os>
	I0719 04:23:22.567850  145142 main.go:141] libmachine: (ha-925161-m02)     <type>hvm</type>
	I0719 04:23:22.567855  145142 main.go:141] libmachine: (ha-925161-m02)     <boot dev='cdrom'/>
	I0719 04:23:22.567865  145142 main.go:141] libmachine: (ha-925161-m02)     <boot dev='hd'/>
	I0719 04:23:22.567871  145142 main.go:141] libmachine: (ha-925161-m02)     <bootmenu enable='no'/>
	I0719 04:23:22.567876  145142 main.go:141] libmachine: (ha-925161-m02)   </os>
	I0719 04:23:22.567881  145142 main.go:141] libmachine: (ha-925161-m02)   <devices>
	I0719 04:23:22.567889  145142 main.go:141] libmachine: (ha-925161-m02)     <disk type='file' device='cdrom'>
	I0719 04:23:22.567901  145142 main.go:141] libmachine: (ha-925161-m02)       <source file='/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02/boot2docker.iso'/>
	I0719 04:23:22.567914  145142 main.go:141] libmachine: (ha-925161-m02)       <target dev='hdc' bus='scsi'/>
	I0719 04:23:22.567922  145142 main.go:141] libmachine: (ha-925161-m02)       <readonly/>
	I0719 04:23:22.567929  145142 main.go:141] libmachine: (ha-925161-m02)     </disk>
	I0719 04:23:22.567950  145142 main.go:141] libmachine: (ha-925161-m02)     <disk type='file' device='disk'>
	I0719 04:23:22.567967  145142 main.go:141] libmachine: (ha-925161-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0719 04:23:22.567976  145142 main.go:141] libmachine: (ha-925161-m02)       <source file='/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02/ha-925161-m02.rawdisk'/>
	I0719 04:23:22.567984  145142 main.go:141] libmachine: (ha-925161-m02)       <target dev='hda' bus='virtio'/>
	I0719 04:23:22.567989  145142 main.go:141] libmachine: (ha-925161-m02)     </disk>
	I0719 04:23:22.567996  145142 main.go:141] libmachine: (ha-925161-m02)     <interface type='network'>
	I0719 04:23:22.568003  145142 main.go:141] libmachine: (ha-925161-m02)       <source network='mk-ha-925161'/>
	I0719 04:23:22.568009  145142 main.go:141] libmachine: (ha-925161-m02)       <model type='virtio'/>
	I0719 04:23:22.568014  145142 main.go:141] libmachine: (ha-925161-m02)     </interface>
	I0719 04:23:22.568021  145142 main.go:141] libmachine: (ha-925161-m02)     <interface type='network'>
	I0719 04:23:22.568027  145142 main.go:141] libmachine: (ha-925161-m02)       <source network='default'/>
	I0719 04:23:22.568034  145142 main.go:141] libmachine: (ha-925161-m02)       <model type='virtio'/>
	I0719 04:23:22.568039  145142 main.go:141] libmachine: (ha-925161-m02)     </interface>
	I0719 04:23:22.568051  145142 main.go:141] libmachine: (ha-925161-m02)     <serial type='pty'>
	I0719 04:23:22.568059  145142 main.go:141] libmachine: (ha-925161-m02)       <target port='0'/>
	I0719 04:23:22.568066  145142 main.go:141] libmachine: (ha-925161-m02)     </serial>
	I0719 04:23:22.568087  145142 main.go:141] libmachine: (ha-925161-m02)     <console type='pty'>
	I0719 04:23:22.568102  145142 main.go:141] libmachine: (ha-925161-m02)       <target type='serial' port='0'/>
	I0719 04:23:22.568114  145142 main.go:141] libmachine: (ha-925161-m02)     </console>
	I0719 04:23:22.568124  145142 main.go:141] libmachine: (ha-925161-m02)     <rng model='virtio'>
	I0719 04:23:22.568135  145142 main.go:141] libmachine: (ha-925161-m02)       <backend model='random'>/dev/random</backend>
	I0719 04:23:22.568145  145142 main.go:141] libmachine: (ha-925161-m02)     </rng>
	I0719 04:23:22.568153  145142 main.go:141] libmachine: (ha-925161-m02)     
	I0719 04:23:22.568162  145142 main.go:141] libmachine: (ha-925161-m02)     
	I0719 04:23:22.568170  145142 main.go:141] libmachine: (ha-925161-m02)   </devices>
	I0719 04:23:22.568183  145142 main.go:141] libmachine: (ha-925161-m02) </domain>
	I0719 04:23:22.568212  145142 main.go:141] libmachine: (ha-925161-m02) 
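The block above is the libvirt domain XML for ha-925161-m02, logged one element per line: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO attached as a CD-ROM, the raw disk image, and two virtio NICs (one on the private mk-ha-925161 network, one on default). Defining and starting such a domain from Go can be done with the libvirt Go bindings; a minimal sketch, assuming the bindings and libvirt development headers are installed and that the assembled XML is supplied via a hypothetical DOMAIN_XML environment variable:

    package main

    import (
    	"fmt"
    	"os"

    	"libvirt.org/go/libvirt"
    )

    func main() {
    	// Connect to the same system hypervisor the log uses (qemu:///system).
    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "connect:", err)
    		os.Exit(1)
    	}
    	defer conn.Close()

    	// Assumption: DOMAIN_XML holds XML like the block logged above.
    	domainXML := os.Getenv("DOMAIN_XML")
    	dom, err := conn.DomainDefineXML(domainXML)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "define:", err)
    		os.Exit(1)
    	}
    	defer dom.Free()

    	if err := dom.Create(); err != nil { // start the freshly defined domain
    		fmt.Fprintln(os.Stderr, "start:", err)
    		os.Exit(1)
    	}
    	fmt.Println("domain defined and started")
    }
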
	I0719 04:23:22.574696  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:4a:49:4a in network default
	I0719 04:23:22.575126  145142 main.go:141] libmachine: (ha-925161-m02) Ensuring networks are active...
	I0719 04:23:22.575140  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:22.575718  145142 main.go:141] libmachine: (ha-925161-m02) Ensuring network default is active
	I0719 04:23:22.575984  145142 main.go:141] libmachine: (ha-925161-m02) Ensuring network mk-ha-925161 is active
	I0719 04:23:22.576273  145142 main.go:141] libmachine: (ha-925161-m02) Getting domain xml...
	I0719 04:23:22.576903  145142 main.go:141] libmachine: (ha-925161-m02) Creating domain...
	I0719 04:23:23.822612  145142 main.go:141] libmachine: (ha-925161-m02) Waiting to get IP...
	I0719 04:23:23.823391  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:23.823835  145142 main.go:141] libmachine: (ha-925161-m02) DBG | unable to find current IP address of domain ha-925161-m02 in network mk-ha-925161
	I0719 04:23:23.823860  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:23.823808  145551 retry.go:31] will retry after 275.972565ms: waiting for machine to come up
	I0719 04:23:24.101445  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:24.101947  145142 main.go:141] libmachine: (ha-925161-m02) DBG | unable to find current IP address of domain ha-925161-m02 in network mk-ha-925161
	I0719 04:23:24.101976  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:24.101901  145551 retry.go:31] will retry after 260.725307ms: waiting for machine to come up
	I0719 04:23:24.364444  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:24.364955  145142 main.go:141] libmachine: (ha-925161-m02) DBG | unable to find current IP address of domain ha-925161-m02 in network mk-ha-925161
	I0719 04:23:24.364979  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:24.364916  145551 retry.go:31] will retry after 330.33525ms: waiting for machine to come up
	I0719 04:23:24.696430  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:24.696874  145142 main.go:141] libmachine: (ha-925161-m02) DBG | unable to find current IP address of domain ha-925161-m02 in network mk-ha-925161
	I0719 04:23:24.696900  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:24.696824  145551 retry.go:31] will retry after 565.545583ms: waiting for machine to come up
	I0719 04:23:25.264349  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:25.264830  145142 main.go:141] libmachine: (ha-925161-m02) DBG | unable to find current IP address of domain ha-925161-m02 in network mk-ha-925161
	I0719 04:23:25.264853  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:25.264796  145551 retry.go:31] will retry after 675.025996ms: waiting for machine to come up
	I0719 04:23:25.941773  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:25.942328  145142 main.go:141] libmachine: (ha-925161-m02) DBG | unable to find current IP address of domain ha-925161-m02 in network mk-ha-925161
	I0719 04:23:25.942354  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:25.942286  145551 retry.go:31] will retry after 916.575061ms: waiting for machine to come up
	I0719 04:23:26.860018  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:26.860488  145142 main.go:141] libmachine: (ha-925161-m02) DBG | unable to find current IP address of domain ha-925161-m02 in network mk-ha-925161
	I0719 04:23:26.860513  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:26.860431  145551 retry.go:31] will retry after 811.549285ms: waiting for machine to come up
	I0719 04:23:27.673180  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:27.673674  145142 main.go:141] libmachine: (ha-925161-m02) DBG | unable to find current IP address of domain ha-925161-m02 in network mk-ha-925161
	I0719 04:23:27.673700  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:27.673623  145551 retry.go:31] will retry after 1.317439306s: waiting for machine to come up
	I0719 04:23:28.993057  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:28.993522  145142 main.go:141] libmachine: (ha-925161-m02) DBG | unable to find current IP address of domain ha-925161-m02 in network mk-ha-925161
	I0719 04:23:28.993548  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:28.993475  145551 retry.go:31] will retry after 1.539873167s: waiting for machine to come up
	I0719 04:23:30.535187  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:30.535597  145142 main.go:141] libmachine: (ha-925161-m02) DBG | unable to find current IP address of domain ha-925161-m02 in network mk-ha-925161
	I0719 04:23:30.535624  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:30.535543  145551 retry.go:31] will retry after 1.962816348s: waiting for machine to come up
	I0719 04:23:32.500041  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:32.500533  145142 main.go:141] libmachine: (ha-925161-m02) DBG | unable to find current IP address of domain ha-925161-m02 in network mk-ha-925161
	I0719 04:23:32.500559  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:32.500487  145551 retry.go:31] will retry after 2.523138452s: waiting for machine to come up
	I0719 04:23:35.026265  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:35.026731  145142 main.go:141] libmachine: (ha-925161-m02) DBG | unable to find current IP address of domain ha-925161-m02 in network mk-ha-925161
	I0719 04:23:35.026758  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:35.026693  145551 retry.go:31] will retry after 2.642099523s: waiting for machine to come up
	I0719 04:23:37.670505  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:37.670903  145142 main.go:141] libmachine: (ha-925161-m02) DBG | unable to find current IP address of domain ha-925161-m02 in network mk-ha-925161
	I0719 04:23:37.670925  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:37.670868  145551 retry.go:31] will retry after 2.788794797s: waiting for machine to come up
	I0719 04:23:40.462661  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:40.463059  145142 main.go:141] libmachine: (ha-925161-m02) DBG | unable to find current IP address of domain ha-925161-m02 in network mk-ha-925161
	I0719 04:23:40.463087  145142 main.go:141] libmachine: (ha-925161-m02) DBG | I0719 04:23:40.463016  145551 retry.go:31] will retry after 5.427001191s: waiting for machine to come up
	I0719 04:23:45.893886  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:45.894410  145142 main.go:141] libmachine: (ha-925161-m02) Found IP for machine: 192.168.39.102
	I0719 04:23:45.894441  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has current primary IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:45.894450  145142 main.go:141] libmachine: (ha-925161-m02) Reserving static IP address...
	I0719 04:23:45.894803  145142 main.go:141] libmachine: (ha-925161-m02) DBG | unable to find host DHCP lease matching {name: "ha-925161-m02", mac: "52:54:00:17:48:0b", ip: "192.168.39.102"} in network mk-ha-925161
	I0719 04:23:45.966789  145142 main.go:141] libmachine: (ha-925161-m02) DBG | Getting to WaitForSSH function...
	I0719 04:23:45.966826  145142 main.go:141] libmachine: (ha-925161-m02) Reserved static IP address: 192.168.39.102
	I0719 04:23:45.966840  145142 main.go:141] libmachine: (ha-925161-m02) Waiting for SSH to be available...
	I0719 04:23:45.969592  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:45.970039  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:minikube Clientid:01:52:54:00:17:48:0b}
	I0719 04:23:45.970070  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:45.970209  145142 main.go:141] libmachine: (ha-925161-m02) DBG | Using SSH client type: external
	I0719 04:23:45.970236  145142 main.go:141] libmachine: (ha-925161-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02/id_rsa (-rw-------)
	I0719 04:23:45.970279  145142 main.go:141] libmachine: (ha-925161-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 04:23:45.970297  145142 main.go:141] libmachine: (ha-925161-m02) DBG | About to run SSH command:
	I0719 04:23:45.970315  145142 main.go:141] libmachine: (ha-925161-m02) DBG | exit 0
	I0719 04:23:46.097303  145142 main.go:141] libmachine: (ha-925161-m02) DBG | SSH cmd err, output: <nil>: 
	I0719 04:23:46.097520  145142 main.go:141] libmachine: (ha-925161-m02) KVM machine creation complete!
	I0719 04:23:46.097761  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetConfigRaw
	I0719 04:23:46.098492  145142 main.go:141] libmachine: (ha-925161-m02) Calling .DriverName
	I0719 04:23:46.098703  145142 main.go:141] libmachine: (ha-925161-m02) Calling .DriverName
	I0719 04:23:46.098898  145142 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0719 04:23:46.098913  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetState
	I0719 04:23:46.100199  145142 main.go:141] libmachine: Detecting operating system of created instance...
	I0719 04:23:46.100218  145142 main.go:141] libmachine: Waiting for SSH to be available...
	I0719 04:23:46.100226  145142 main.go:141] libmachine: Getting to WaitForSSH function...
	I0719 04:23:46.100233  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHHostname
	I0719 04:23:46.102467  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:46.102798  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:23:46.102833  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:46.103085  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHPort
	I0719 04:23:46.103266  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:23:46.103426  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:23:46.103579  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHUsername
	I0719 04:23:46.103731  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:23:46.103931  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0719 04:23:46.103941  145142 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0719 04:23:46.208113  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 04:23:46.208139  145142 main.go:141] libmachine: Detecting the provisioner...
	I0719 04:23:46.208147  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHHostname
	I0719 04:23:46.210813  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:46.211254  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:23:46.211280  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:46.211417  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHPort
	I0719 04:23:46.211599  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:23:46.211750  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:23:46.211896  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHUsername
	I0719 04:23:46.212048  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:23:46.212210  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0719 04:23:46.212220  145142 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0719 04:23:46.317502  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0719 04:23:46.317586  145142 main.go:141] libmachine: found compatible host: buildroot
	I0719 04:23:46.317599  145142 main.go:141] libmachine: Provisioning with buildroot...
	I0719 04:23:46.317612  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetMachineName
	I0719 04:23:46.317879  145142 buildroot.go:166] provisioning hostname "ha-925161-m02"
	I0719 04:23:46.317907  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetMachineName
	I0719 04:23:46.318279  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHHostname
	I0719 04:23:46.321129  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:46.321504  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:23:46.321527  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:46.321709  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHPort
	I0719 04:23:46.321902  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:23:46.322063  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:23:46.322247  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHUsername
	I0719 04:23:46.322394  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:23:46.322615  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0719 04:23:46.322634  145142 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-925161-m02 && echo "ha-925161-m02" | sudo tee /etc/hostname
	I0719 04:23:46.443271  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-925161-m02
	
	I0719 04:23:46.443315  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHHostname
	I0719 04:23:46.446122  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:46.446458  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:23:46.446488  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:46.446756  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHPort
	I0719 04:23:46.446954  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:23:46.447142  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:23:46.447303  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHUsername
	I0719 04:23:46.447439  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:23:46.447605  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0719 04:23:46.447622  145142 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-925161-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-925161-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-925161-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 04:23:46.562033  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 04:23:46.562066  145142 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-122995/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-122995/.minikube}
	I0719 04:23:46.562095  145142 buildroot.go:174] setting up certificates
	I0719 04:23:46.562118  145142 provision.go:84] configureAuth start
	I0719 04:23:46.562136  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetMachineName
	I0719 04:23:46.562504  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetIP
	I0719 04:23:46.564747  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:46.565046  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:23:46.565090  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:46.565259  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHHostname
	I0719 04:23:46.567404  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:46.567799  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:23:46.567827  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:46.567965  145142 provision.go:143] copyHostCerts
	I0719 04:23:46.568002  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem
	I0719 04:23:46.568043  145142 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem, removing ...
	I0719 04:23:46.568053  145142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem
	I0719 04:23:46.568148  145142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem (1082 bytes)
	I0719 04:23:46.568235  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem
	I0719 04:23:46.568258  145142 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem, removing ...
	I0719 04:23:46.568266  145142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem
	I0719 04:23:46.568293  145142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem (1123 bytes)
	I0719 04:23:46.568343  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem
	I0719 04:23:46.568360  145142 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem, removing ...
	I0719 04:23:46.568366  145142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem
	I0719 04:23:46.568389  145142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem (1679 bytes)
	I0719 04:23:46.568441  145142 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem org=jenkins.ha-925161-m02 san=[127.0.0.1 192.168.39.102 ha-925161-m02 localhost minikube]
	I0719 04:23:46.767791  145142 provision.go:177] copyRemoteCerts
	I0719 04:23:46.767850  145142 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 04:23:46.767876  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHHostname
	I0719 04:23:46.770577  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:46.770865  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:23:46.770890  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:46.771031  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHPort
	I0719 04:23:46.771229  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:23:46.771404  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHUsername
	I0719 04:23:46.771542  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02/id_rsa Username:docker}
	I0719 04:23:46.855920  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 04:23:46.855989  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 04:23:46.879566  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 04:23:46.879642  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0719 04:23:46.901751  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 04:23:46.901832  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 04:23:46.923420  145142 provision.go:87] duration metric: took 361.284659ms to configureAuth
	I0719 04:23:46.923449  145142 buildroot.go:189] setting minikube options for container-runtime
	I0719 04:23:46.923618  145142 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:23:46.923690  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHHostname
	I0719 04:23:46.926464  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:46.926812  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:23:46.926841  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:46.927022  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHPort
	I0719 04:23:46.927234  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:23:46.927409  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:23:46.927566  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHUsername
	I0719 04:23:46.927760  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:23:46.927928  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0719 04:23:46.927942  145142 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 04:23:47.180531  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 04:23:47.180559  145142 main.go:141] libmachine: Checking connection to Docker...
	I0719 04:23:47.180567  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetURL
	I0719 04:23:47.181999  145142 main.go:141] libmachine: (ha-925161-m02) DBG | Using libvirt version 6000000
	I0719 04:23:47.184247  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:47.184548  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:23:47.184577  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:47.184722  145142 main.go:141] libmachine: Docker is up and running!
	I0719 04:23:47.184737  145142 main.go:141] libmachine: Reticulating splines...
	I0719 04:23:47.184745  145142 client.go:171] duration metric: took 24.897645776s to LocalClient.Create
	I0719 04:23:47.184774  145142 start.go:167] duration metric: took 24.897712614s to libmachine.API.Create "ha-925161"
	I0719 04:23:47.184792  145142 start.go:293] postStartSetup for "ha-925161-m02" (driver="kvm2")
	I0719 04:23:47.184810  145142 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 04:23:47.184839  145142 main.go:141] libmachine: (ha-925161-m02) Calling .DriverName
	I0719 04:23:47.185138  145142 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 04:23:47.185170  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHHostname
	I0719 04:23:47.187457  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:47.187795  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:23:47.187814  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:47.188012  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHPort
	I0719 04:23:47.188205  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:23:47.188368  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHUsername
	I0719 04:23:47.188474  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02/id_rsa Username:docker}
	I0719 04:23:47.270775  145142 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 04:23:47.274946  145142 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 04:23:47.274973  145142 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-122995/.minikube/addons for local assets ...
	I0719 04:23:47.275048  145142 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-122995/.minikube/files for local assets ...
	I0719 04:23:47.275138  145142 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem -> 1301702.pem in /etc/ssl/certs
	I0719 04:23:47.275149  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem -> /etc/ssl/certs/1301702.pem
	I0719 04:23:47.275229  145142 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 04:23:47.283727  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem --> /etc/ssl/certs/1301702.pem (1708 bytes)
	I0719 04:23:47.305890  145142 start.go:296] duration metric: took 121.078307ms for postStartSetup
	I0719 04:23:47.305940  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetConfigRaw
	I0719 04:23:47.306507  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetIP
	I0719 04:23:47.309329  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:47.309738  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:23:47.309770  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:47.310048  145142 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/config.json ...
	I0719 04:23:47.310226  145142 start.go:128] duration metric: took 25.041200539s to createHost
	I0719 04:23:47.310250  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHHostname
	I0719 04:23:47.312540  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:47.312846  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:23:47.312874  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:47.313037  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHPort
	I0719 04:23:47.313221  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:23:47.313416  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:23:47.313546  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHUsername
	I0719 04:23:47.313686  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:23:47.313867  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0719 04:23:47.313886  145142 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 04:23:47.421517  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721363027.393588832
	
	I0719 04:23:47.421551  145142 fix.go:216] guest clock: 1721363027.393588832
	I0719 04:23:47.421562  145142 fix.go:229] Guest: 2024-07-19 04:23:47.393588832 +0000 UTC Remote: 2024-07-19 04:23:47.310238048 +0000 UTC m=+77.563362110 (delta=83.350784ms)
	I0719 04:23:47.421603  145142 fix.go:200] guest clock delta is within tolerance: 83.350784ms
	I0719 04:23:47.421615  145142 start.go:83] releasing machines lock for "ha-925161-m02", held for 25.152696164s
	I0719 04:23:47.421643  145142 main.go:141] libmachine: (ha-925161-m02) Calling .DriverName
	I0719 04:23:47.421933  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetIP
	I0719 04:23:47.424529  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:47.424847  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:23:47.424874  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:47.427070  145142 out.go:177] * Found network options:
	I0719 04:23:47.428426  145142 out.go:177]   - NO_PROXY=192.168.39.246
	W0719 04:23:47.429480  145142 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 04:23:47.429512  145142 main.go:141] libmachine: (ha-925161-m02) Calling .DriverName
	I0719 04:23:47.430013  145142 main.go:141] libmachine: (ha-925161-m02) Calling .DriverName
	I0719 04:23:47.430180  145142 main.go:141] libmachine: (ha-925161-m02) Calling .DriverName
	I0719 04:23:47.430287  145142 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 04:23:47.430334  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHHostname
	W0719 04:23:47.430369  145142 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 04:23:47.430452  145142 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 04:23:47.430471  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHHostname
	I0719 04:23:47.433224  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:47.433608  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:23:47.433647  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:47.433672  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:47.433810  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHPort
	I0719 04:23:47.434009  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:23:47.434144  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:23:47.434152  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHUsername
	I0719 04:23:47.434170  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:47.434343  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHPort
	I0719 04:23:47.434346  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02/id_rsa Username:docker}
	I0719 04:23:47.434500  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:23:47.434650  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHUsername
	I0719 04:23:47.434857  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02/id_rsa Username:docker}
	I0719 04:23:47.665429  145142 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 04:23:47.670929  145142 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 04:23:47.670995  145142 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 04:23:47.685677  145142 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 04:23:47.685705  145142 start.go:495] detecting cgroup driver to use...
	I0719 04:23:47.685773  145142 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 04:23:47.701985  145142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 04:23:47.715043  145142 docker.go:217] disabling cri-docker service (if available) ...
	I0719 04:23:47.715109  145142 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 04:23:47.727963  145142 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 04:23:47.741231  145142 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 04:23:47.875807  145142 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 04:23:48.028981  145142 docker.go:233] disabling docker service ...
	I0719 04:23:48.029089  145142 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 04:23:48.042094  145142 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 04:23:48.053826  145142 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 04:23:48.163798  145142 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 04:23:48.284828  145142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 04:23:48.297864  145142 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 04:23:48.315689  145142 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 04:23:48.315752  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:23:48.325758  145142 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 04:23:48.325833  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:23:48.335811  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:23:48.345803  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:23:48.355829  145142 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 04:23:48.365892  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:23:48.375462  145142 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:23:48.390864  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:23:48.400585  145142 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 04:23:48.409893  145142 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 04:23:48.409952  145142 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 04:23:48.422843  145142 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 04:23:48.432050  145142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:23:48.553836  145142 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 04:23:48.680828  145142 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 04:23:48.680906  145142 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 04:23:48.685127  145142 start.go:563] Will wait 60s for crictl version
	I0719 04:23:48.685196  145142 ssh_runner.go:195] Run: which crictl
	I0719 04:23:48.688577  145142 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 04:23:48.725770  145142 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 04:23:48.725843  145142 ssh_runner.go:195] Run: crio --version
	I0719 04:23:48.752843  145142 ssh_runner.go:195] Run: crio --version
	I0719 04:23:48.781297  145142 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 04:23:48.782551  145142 out.go:177]   - env NO_PROXY=192.168.39.246
	I0719 04:23:48.783615  145142 main.go:141] libmachine: (ha-925161-m02) Calling .GetIP
	I0719 04:23:48.786383  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:48.786766  145142 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:23:35 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:23:48.786801  145142 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:23:48.787041  145142 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 04:23:48.790762  145142 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 04:23:48.802032  145142 mustload.go:65] Loading cluster: ha-925161
	I0719 04:23:48.802203  145142 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:23:48.802483  145142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:23:48.802516  145142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:23:48.817217  145142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38311
	I0719 04:23:48.817735  145142 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:23:48.818268  145142 main.go:141] libmachine: Using API Version  1
	I0719 04:23:48.818287  145142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:23:48.818587  145142 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:23:48.818799  145142 main.go:141] libmachine: (ha-925161) Calling .GetState
	I0719 04:23:48.820214  145142 host.go:66] Checking if "ha-925161" exists ...
	I0719 04:23:48.820543  145142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:23:48.820571  145142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:23:48.835311  145142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41189
	I0719 04:23:48.835793  145142 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:23:48.836295  145142 main.go:141] libmachine: Using API Version  1
	I0719 04:23:48.836324  145142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:23:48.836663  145142 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:23:48.836837  145142 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:23:48.836992  145142 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161 for IP: 192.168.39.102
	I0719 04:23:48.837014  145142 certs.go:194] generating shared ca certs ...
	I0719 04:23:48.837032  145142 certs.go:226] acquiring lock for ca certs: {Name:mk4073377b5f511f5cfaf63e5b0f12377e731a57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:23:48.837193  145142 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key
	I0719 04:23:48.837232  145142 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key
	I0719 04:23:48.837242  145142 certs.go:256] generating profile certs ...
	I0719 04:23:48.837314  145142 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/client.key
	I0719 04:23:48.837338  145142 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key.fda840c7
	I0719 04:23:48.837355  145142 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt.fda840c7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246 192.168.39.102 192.168.39.254]
	I0719 04:23:48.993970  145142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt.fda840c7 ...
	I0719 04:23:48.994001  145142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt.fda840c7: {Name:mk90575d4c455f79af428bec6bc32c43a03c8046 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:23:48.994178  145142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key.fda840c7 ...
	I0719 04:23:48.994191  145142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key.fda840c7: {Name:mka50eebeeaf80e87f1fabc734dbcc58699400d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:23:48.994265  145142 certs.go:381] copying /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt.fda840c7 -> /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt
	I0719 04:23:48.994420  145142 certs.go:385] copying /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key.fda840c7 -> /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key
	I0719 04:23:48.994561  145142 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.key
	I0719 04:23:48.994578  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 04:23:48.994591  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0719 04:23:48.994604  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 04:23:48.994617  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 04:23:48.994629  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 04:23:48.994640  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 04:23:48.994652  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 04:23:48.994665  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 04:23:48.994727  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem (1338 bytes)
	W0719 04:23:48.994755  145142 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170_empty.pem, impossibly tiny 0 bytes
	I0719 04:23:48.994765  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 04:23:48.994784  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem (1082 bytes)
	I0719 04:23:48.994806  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem (1123 bytes)
	I0719 04:23:48.994826  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem (1679 bytes)
	I0719 04:23:48.994860  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem (1708 bytes)
	I0719 04:23:48.994886  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem -> /usr/share/ca-certificates/1301702.pem
	I0719 04:23:48.994899  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:23:48.994911  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem -> /usr/share/ca-certificates/130170.pem
	I0719 04:23:48.994943  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:23:48.997901  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:23:48.998309  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:23:48.998348  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:23:48.998487  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:23:48.998679  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:23:48.998849  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:23:48.998986  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:23:49.073502  145142 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0719 04:23:49.078395  145142 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0719 04:23:49.091138  145142 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0719 04:23:49.095328  145142 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0719 04:23:49.104999  145142 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0719 04:23:49.108931  145142 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0719 04:23:49.118880  145142 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0719 04:23:49.122703  145142 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0719 04:23:49.132049  145142 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0719 04:23:49.135865  145142 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0719 04:23:49.147625  145142 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0719 04:23:49.154377  145142 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0719 04:23:49.164354  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 04:23:49.191960  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 04:23:49.214646  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 04:23:49.236420  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 04:23:49.258237  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0719 04:23:49.280651  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 04:23:49.305378  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 04:23:49.327065  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 04:23:49.348251  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem --> /usr/share/ca-certificates/1301702.pem (1708 bytes)
	I0719 04:23:49.369938  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 04:23:49.390999  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem --> /usr/share/ca-certificates/130170.pem (1338 bytes)
	I0719 04:23:49.412191  145142 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0719 04:23:49.428401  145142 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0719 04:23:49.443255  145142 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0719 04:23:49.462359  145142 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0719 04:23:49.478755  145142 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0719 04:23:49.493969  145142 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0719 04:23:49.509360  145142 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0719 04:23:49.524139  145142 ssh_runner.go:195] Run: openssl version
	I0719 04:23:49.529351  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1301702.pem && ln -fs /usr/share/ca-certificates/1301702.pem /etc/ssl/certs/1301702.pem"
	I0719 04:23:49.539045  145142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1301702.pem
	I0719 04:23:49.543097  145142 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 04:19 /usr/share/ca-certificates/1301702.pem
	I0719 04:23:49.543148  145142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1301702.pem
	I0719 04:23:49.548609  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1301702.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 04:23:49.558736  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 04:23:49.569099  145142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:23:49.573186  145142 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:38 /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:23:49.573243  145142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:23:49.578392  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 04:23:49.589548  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130170.pem && ln -fs /usr/share/ca-certificates/130170.pem /etc/ssl/certs/130170.pem"
	I0719 04:23:49.599258  145142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130170.pem
	I0719 04:23:49.603298  145142 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 04:19 /usr/share/ca-certificates/130170.pem
	I0719 04:23:49.603348  145142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130170.pem
	I0719 04:23:49.608653  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/130170.pem /etc/ssl/certs/51391683.0"
	I0719 04:23:49.618539  145142 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 04:23:49.622126  145142 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 04:23:49.622181  145142 kubeadm.go:934] updating node {m02 192.168.39.102 8443 v1.30.3 crio true true} ...
	I0719 04:23:49.622285  145142 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-925161-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-925161 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 04:23:49.622311  145142 kube-vip.go:115] generating kube-vip config ...
	I0719 04:23:49.622351  145142 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0719 04:23:49.638753  145142 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0719 04:23:49.638820  145142 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
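The manifest just printed is the static pod that gets dropped into /etc/kubernetes/manifests so kube-vip can advertise the HA virtual IP (192.168.39.254) over ARP and run leader election for the control plane. A rough illustration of rendering such a manifest from a handful of parameters with text/template follows; the template here is a heavily trimmed stand-in, not minikube's actual kube-vip template:

package main

import (
	"os"
	"text/template"
)

// vipParams holds the values that actually vary per cluster.
type vipParams struct {
	VIP       string // HA virtual IP, e.g. 192.168.39.254
	Port      int    // API server port, e.g. 8443
	Interface string // NIC the VIP is announced on, e.g. eth0
}

// A trimmed stand-in for a kube-vip static pod template.
const vipTemplate = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - name: vip_arp
      value: "true"
    - name: address
      value: {{ .VIP }}
    - name: port
      value: "{{ .Port }}"
    - name: vip_interface
      value: {{ .Interface }}
    - name: cp_enable
      value: "true"
    - name: lb_enable
      value: "true"
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(vipTemplate))
	// Values taken from the log above.
	_ = t.Execute(os.Stdout, vipParams{VIP: "192.168.39.254", Port: 8443, Interface: "eth0"})
}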
	I0719 04:23:49.638878  145142 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 04:23:49.647909  145142 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0719 04:23:49.647970  145142 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0719 04:23:49.656432  145142 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0719 04:23:49.656457  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0719 04:23:49.656534  145142 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0719 04:23:49.656547  145142 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19302-122995/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0719 04:23:49.656576  145142 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19302-122995/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0719 04:23:49.660150  145142 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0719 04:23:49.660173  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0719 04:23:50.579314  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0719 04:23:50.579424  145142 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0719 04:23:50.583872  145142 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0719 04:23:50.583917  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0719 04:24:00.473124  145142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:24:00.489514  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0719 04:24:00.489617  145142 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0719 04:24:00.493642  145142 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0719 04:24:00.493674  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
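Each of the kubectl/kubeadm/kubelet binaries above is fetched from dl.k8s.io with a checksum reference ("?checksum=file:...sha256") before being copied into /var/lib/minikube/binaries. A minimal sketch of that download-and-verify step in Go, with the URL hard-coded for illustration and error handling kept short:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into path and returns the SHA-256 of what was written.
func fetch(url, path string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	f, err := os.Create(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	const base = "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl"
	got, err := fetch(base, "kubectl")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The published .sha256 file contains the hex digest of the binary.
	resp, err := http.Get(base + ".sha256")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	want, _ := io.ReadAll(resp.Body)
	if got != strings.TrimSpace(string(want)) {
		fmt.Fprintln(os.Stderr, "checksum mismatch")
		os.Exit(1)
	}
	fmt.Println("kubectl verified:", got)
}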
	I0719 04:24:00.857157  145142 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0719 04:24:00.865878  145142 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0719 04:24:00.881358  145142 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 04:24:00.896419  145142 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0719 04:24:00.911634  145142 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0719 04:24:00.915392  145142 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 04:24:00.927170  145142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:24:01.036650  145142 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 04:24:01.053699  145142 host.go:66] Checking if "ha-925161" exists ...
	I0719 04:24:01.054086  145142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:24:01.054135  145142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:24:01.069107  145142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40495
	I0719 04:24:01.069669  145142 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:24:01.070271  145142 main.go:141] libmachine: Using API Version  1
	I0719 04:24:01.070302  145142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:24:01.070636  145142 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:24:01.070859  145142 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:24:01.071025  145142 start.go:317] joinCluster: &{Name:ha-925161 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-925161 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:24:01.071143  145142 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0719 04:24:01.071164  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:24:01.074471  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:24:01.074994  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:24:01.075023  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:24:01.075173  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:24:01.075337  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:24:01.075523  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:24:01.075642  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:24:01.228767  145142 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 04:24:01.228815  145142 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9brzgf.8utu0l810f8e3ass --discovery-token-ca-cert-hash sha256:1b8c9b438cd382daae07d0c80077e3e844c6e3a56a419c26c4cfa86e5846b833 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-925161-m02 --control-plane --apiserver-advertise-address=192.168.39.102 --apiserver-bind-port=8443"
	I0719 04:24:22.827882  145142 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9brzgf.8utu0l810f8e3ass --discovery-token-ca-cert-hash sha256:1b8c9b438cd382daae07d0c80077e3e844c6e3a56a419c26c4cfa86e5846b833 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-925161-m02 --control-plane --apiserver-advertise-address=192.168.39.102 --apiserver-bind-port=8443": (21.59903646s)
	I0719 04:24:22.827926  145142 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0719 04:24:23.438495  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-925161-m02 minikube.k8s.io/updated_at=2024_07_19T04_24_23_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db minikube.k8s.io/name=ha-925161 minikube.k8s.io/primary=false
	I0719 04:24:23.559514  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-925161-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0719 04:24:23.679031  145142 start.go:319] duration metric: took 22.608003168s to joinCluster
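Joining the second control-plane node is a two-step sequence: "kubeadm token create --print-join-command" runs on the existing control plane (over SSH to 192.168.39.246), and the printed join command is then executed on m02 with --control-plane and the advertise address appended, followed by the label and taint adjustments above. A compressed sketch of that flow; the runOn helper standing in for minikube's ssh_runner is hypothetical:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runOn is a hypothetical stand-in for minikube's ssh_runner: it executes a
// shell command on the named host over ssh and returns its trimmed stdout.
func runOn(host, command string) (string, error) {
	out, err := exec.Command("ssh", host, command).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	// Step 1: mint a join command on the existing control plane.
	join, err := runOn("root@192.168.39.246",
		"kubeadm token create --print-join-command --ttl=0")
	if err != nil {
		panic(err)
	}

	// Step 2: run it on the new node, promoted to a control-plane member.
	full := join + " --control-plane --apiserver-advertise-address=192.168.39.102 --apiserver-bind-port=8443"
	if _, err := runOn("root@192.168.39.102", full); err != nil {
		panic(err)
	}
	fmt.Println("m02 joined as a control-plane node")
}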
	I0719 04:24:23.679137  145142 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 04:24:23.679441  145142 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:24:23.680758  145142 out.go:177] * Verifying Kubernetes components...
	I0719 04:24:23.682098  145142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:24:23.924747  145142 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 04:24:23.982153  145142 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19302-122995/kubeconfig
	I0719 04:24:23.982537  145142 kapi.go:59] client config for ha-925161: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/client.crt", KeyFile:"/home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/client.key", CAFile:"/home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0719 04:24:23.982657  145142 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.246:8443
	I0719 04:24:23.982968  145142 node_ready.go:35] waiting up to 6m0s for node "ha-925161-m02" to be "Ready" ...
	I0719 04:24:23.983126  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:23.983138  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:23.983153  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:23.983162  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:23.995423  145142 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0719 04:24:24.484069  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:24.484102  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:24.484113  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:24.484119  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:24.500110  145142 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0719 04:24:24.983632  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:24.983664  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:24.983677  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:24.983683  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:24.987453  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:25.483529  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:25.483552  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:25.483563  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:25.483570  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:25.486570  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:25.984092  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:25.984114  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:25.984122  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:25.984127  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:25.986806  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:25.987500  145142 node_ready.go:53] node "ha-925161-m02" has status "Ready":"False"
	I0719 04:24:26.484135  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:26.484155  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:26.484164  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:26.484168  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:26.487748  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:26.983423  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:26.983449  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:26.983461  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:26.983477  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:26.986210  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:27.483515  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:27.483535  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:27.483543  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:27.483547  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:27.486181  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:27.983445  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:27.983470  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:27.983481  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:27.983487  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:27.986156  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:28.484084  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:28.484105  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:28.484112  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:28.484118  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:28.487235  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:28.487884  145142 node_ready.go:53] node "ha-925161-m02" has status "Ready":"False"
	I0719 04:24:28.984128  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:28.984151  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:28.984159  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:28.984164  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:28.988241  145142 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:24:29.483738  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:29.483765  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:29.483777  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:29.483783  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:29.486637  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:29.983290  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:29.983317  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:29.983328  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:29.983332  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:29.986486  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:30.483448  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:30.483470  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:30.483478  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:30.483481  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:30.486677  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:30.983547  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:30.983568  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:30.983575  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:30.983580  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:30.985837  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:30.986267  145142 node_ready.go:53] node "ha-925161-m02" has status "Ready":"False"
	I0719 04:24:31.484210  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:31.484231  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:31.484239  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:31.484243  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:31.487434  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:31.983236  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:31.983258  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:31.983267  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:31.983273  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:31.986453  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:32.483795  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:32.483817  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:32.483826  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:32.483831  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:32.487213  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:32.983266  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:32.983288  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:32.983296  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:32.983301  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:32.985895  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:32.986511  145142 node_ready.go:53] node "ha-925161-m02" has status "Ready":"False"
	I0719 04:24:33.483895  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:33.483918  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:33.483926  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:33.483930  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:33.487091  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:33.983991  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:33.984013  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:33.984021  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:33.984025  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:33.988363  145142 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:24:34.483908  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:34.483936  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:34.483948  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:34.483955  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:34.487526  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:34.983182  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:34.983207  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:34.983215  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:34.983220  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:34.986323  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:34.986816  145142 node_ready.go:53] node "ha-925161-m02" has status "Ready":"False"
	I0719 04:24:35.483191  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:35.483215  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:35.483224  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:35.483228  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:35.486266  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:35.983355  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:35.983376  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:35.983385  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:35.983389  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:35.986382  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:36.483867  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:36.483914  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:36.483927  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:36.483933  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:36.487876  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:36.983941  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:36.983964  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:36.983973  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:36.983975  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:36.987130  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:36.987656  145142 node_ready.go:53] node "ha-925161-m02" has status "Ready":"False"
	I0719 04:24:37.483528  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:37.483549  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:37.483558  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:37.483564  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:37.486511  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:37.983345  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:37.983366  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:37.983373  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:37.983380  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:37.990566  145142 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 04:24:38.483322  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:38.483354  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:38.483363  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:38.483368  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:38.486402  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:38.984161  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:38.984183  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:38.984191  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:38.984194  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:38.987852  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:38.988478  145142 node_ready.go:53] node "ha-925161-m02" has status "Ready":"False"
	I0719 04:24:39.483934  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:39.483957  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:39.483965  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:39.483968  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:39.486746  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:39.983627  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:39.983653  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:39.983661  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:39.983666  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:39.987134  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:40.483388  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:40.483413  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:40.483422  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:40.483427  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:40.486525  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:40.984049  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:40.984071  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:40.984079  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:40.984082  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:40.987243  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:41.483499  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:41.483521  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:41.483529  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:41.483532  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:41.486473  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:41.487029  145142 node_ready.go:49] node "ha-925161-m02" has status "Ready":"True"
	I0719 04:24:41.487047  145142 node_ready.go:38] duration metric: took 17.504036182s for node "ha-925161-m02" to be "Ready" ...
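The polling loop above is nothing more than repeated GETs of the node object until its NodeReady condition reports True. An equivalent check with client-go, sketched under the assumption that a kubeconfig is reachable via the KUBECONFIG environment variable; the node name is taken from the log and the 500ms interval mirrors the cadence seen above:

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's NodeReady condition is True.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll roughly every 500ms, as the log does, until Ready or timeout.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		node, err := client.CoreV1().Nodes().Get(ctx, "ha-925161-m02", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Fprintln(os.Stderr, "timed out waiting for node")
			os.Exit(1)
		case <-time.After(500 * time.Millisecond):
		}
	}
}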
	I0719 04:24:41.487055  145142 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 04:24:41.487155  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0719 04:24:41.487166  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:41.487178  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:41.487187  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:41.491881  145142 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:24:41.497481  145142 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7wzcg" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:41.497561  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7wzcg
	I0719 04:24:41.497570  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:41.497577  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:41.497582  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:41.500114  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:41.500671  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:24:41.500687  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:41.500695  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:41.500700  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:41.503362  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:41.506243  145142 pod_ready.go:92] pod "coredns-7db6d8ff4d-7wzcg" in "kube-system" namespace has status "Ready":"True"
	I0719 04:24:41.506264  145142 pod_ready.go:81] duration metric: took 8.757705ms for pod "coredns-7db6d8ff4d-7wzcg" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:41.506273  145142 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hwdsq" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:41.506325  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hwdsq
	I0719 04:24:41.506332  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:41.506340  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:41.506343  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:41.508774  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:41.509717  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:24:41.509734  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:41.509741  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:41.509745  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:41.511828  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:41.512452  145142 pod_ready.go:92] pod "coredns-7db6d8ff4d-hwdsq" in "kube-system" namespace has status "Ready":"True"
	I0719 04:24:41.512466  145142 pod_ready.go:81] duration metric: took 6.187276ms for pod "coredns-7db6d8ff4d-hwdsq" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:41.512474  145142 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-925161" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:41.512520  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-925161
	I0719 04:24:41.512527  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:41.512533  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:41.512537  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:41.514760  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:41.515247  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:24:41.515261  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:41.515268  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:41.515273  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:41.517392  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:41.518029  145142 pod_ready.go:92] pod "etcd-ha-925161" in "kube-system" namespace has status "Ready":"True"
	I0719 04:24:41.518040  145142 pod_ready.go:81] duration metric: took 5.560858ms for pod "etcd-ha-925161" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:41.518062  145142 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-925161-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:41.518108  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-925161-m02
	I0719 04:24:41.518117  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:41.518129  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:41.518137  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:41.520250  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:41.520719  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:41.520731  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:41.520737  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:41.520741  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:41.522882  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:41.523335  145142 pod_ready.go:92] pod "etcd-ha-925161-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 04:24:41.523350  145142 pod_ready.go:81] duration metric: took 5.280299ms for pod "etcd-ha-925161-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:41.523363  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-925161" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:41.683694  145142 request.go:629] Waited for 160.274101ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-925161
	I0719 04:24:41.683768  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-925161
	I0719 04:24:41.683776  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:41.683784  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:41.683789  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:41.686762  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:41.883724  145142 request.go:629] Waited for 196.348187ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:24:41.883811  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:24:41.883818  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:41.883826  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:41.883830  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:41.886885  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:41.887451  145142 pod_ready.go:92] pod "kube-apiserver-ha-925161" in "kube-system" namespace has status "Ready":"True"
	I0719 04:24:41.887473  145142 pod_ready.go:81] duration metric: took 364.101211ms for pod "kube-apiserver-ha-925161" in "kube-system" namespace to be "Ready" ...
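The "Waited ... due to client-side throttling" lines come from client-go's own token-bucket rate limiter: with QPS and Burst left at 0 in the rest.Config shown earlier, the client falls back to its defaults (roughly 5 requests per second with a small burst), so back-to-back pod and node GETs queue up briefly. A toy reproduction of that behaviour with golang.org/x/time/rate; the 5/10 figures are assumed defaults, not values read out of this log:

package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// Token bucket: ~5 requests per second, burst of 10 (client-go-style defaults).
	limiter := rate.NewLimiter(rate.Limit(5), 10)
	ctx := context.Background()

	for i := 0; i < 15; i++ {
		start := time.Now()
		if err := limiter.Wait(ctx); err != nil {
			panic(err)
		}
		if wait := time.Since(start); wait > time.Millisecond {
			// This is the moment client-go would log "Waited ... due to client-side throttling".
			fmt.Printf("request %2d waited %v\n", i, wait.Round(time.Millisecond))
		} else {
			fmt.Printf("request %2d sent immediately\n", i)
		}
	}
}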
	I0719 04:24:41.887482  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-925161-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:42.083505  145142 request.go:629] Waited for 195.9553ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-925161-m02
	I0719 04:24:42.083574  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-925161-m02
	I0719 04:24:42.083580  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:42.083588  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:42.083595  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:42.087185  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:42.284189  145142 request.go:629] Waited for 196.390812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:42.284250  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:42.284256  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:42.284267  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:42.284273  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:42.287216  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:42.287756  145142 pod_ready.go:92] pod "kube-apiserver-ha-925161-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 04:24:42.287775  145142 pod_ready.go:81] duration metric: took 400.286107ms for pod "kube-apiserver-ha-925161-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:42.287785  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-925161" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:42.484342  145142 request.go:629] Waited for 196.491923ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-925161
	I0719 04:24:42.484401  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-925161
	I0719 04:24:42.484406  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:42.484414  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:42.484417  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:42.487884  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:42.683962  145142 request.go:629] Waited for 195.25386ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:24:42.684032  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:24:42.684039  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:42.684054  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:42.684061  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:42.687387  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:42.687963  145142 pod_ready.go:92] pod "kube-controller-manager-ha-925161" in "kube-system" namespace has status "Ready":"True"
	I0719 04:24:42.687981  145142 pod_ready.go:81] duration metric: took 400.190541ms for pod "kube-controller-manager-ha-925161" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:42.687992  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-925161-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:42.884148  145142 request.go:629] Waited for 196.059016ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-925161-m02
	I0719 04:24:42.884220  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-925161-m02
	I0719 04:24:42.884227  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:42.884241  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:42.884248  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:42.887682  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:43.083653  145142 request.go:629] Waited for 195.282224ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:43.083743  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:43.083749  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:43.083772  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:43.083791  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:43.086880  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:43.088769  145142 pod_ready.go:92] pod "kube-controller-manager-ha-925161-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 04:24:43.088788  145142 pod_ready.go:81] duration metric: took 400.789348ms for pod "kube-controller-manager-ha-925161-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:43.088798  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8dbqt" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:43.283909  145142 request.go:629] Waited for 195.041931ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8dbqt
	I0719 04:24:43.283990  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8dbqt
	I0719 04:24:43.283995  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:43.284001  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:43.284006  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:43.287323  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:43.484238  145142 request.go:629] Waited for 196.366124ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:24:43.484313  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:24:43.484320  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:43.484329  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:43.484336  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:43.487830  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:43.488553  145142 pod_ready.go:92] pod "kube-proxy-8dbqt" in "kube-system" namespace has status "Ready":"True"
	I0719 04:24:43.488576  145142 pod_ready.go:81] duration metric: took 399.770059ms for pod "kube-proxy-8dbqt" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:43.488589  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s6df4" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:43.683505  145142 request.go:629] Waited for 194.836143ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s6df4
	I0719 04:24:43.683582  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s6df4
	I0719 04:24:43.683587  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:43.683596  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:43.683601  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:43.686684  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:43.884063  145142 request.go:629] Waited for 196.777643ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:43.884159  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:43.884165  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:43.884175  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:43.884180  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:43.887036  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:43.887613  145142 pod_ready.go:92] pod "kube-proxy-s6df4" in "kube-system" namespace has status "Ready":"True"
	I0719 04:24:43.887632  145142 pod_ready.go:81] duration metric: took 399.029983ms for pod "kube-proxy-s6df4" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:43.887644  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-925161" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:44.083814  145142 request.go:629] Waited for 196.092093ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-925161
	I0719 04:24:44.083875  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-925161
	I0719 04:24:44.083880  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:44.083888  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:44.083891  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:44.086868  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:44.283811  145142 request.go:629] Waited for 196.379807ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:24:44.283868  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:24:44.283874  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:44.283887  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:44.283895  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:44.287178  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:44.287810  145142 pod_ready.go:92] pod "kube-scheduler-ha-925161" in "kube-system" namespace has status "Ready":"True"
	I0719 04:24:44.287832  145142 pod_ready.go:81] duration metric: took 400.18128ms for pod "kube-scheduler-ha-925161" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:44.287843  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-925161-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:44.483879  145142 request.go:629] Waited for 195.944853ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-925161-m02
	I0719 04:24:44.483959  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-925161-m02
	I0719 04:24:44.483968  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:44.483983  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:44.483991  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:44.486930  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:44.684019  145142 request.go:629] Waited for 196.375072ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:44.684110  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:24:44.684119  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:44.684127  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:44.684132  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:44.687081  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:24:44.687679  145142 pod_ready.go:92] pod "kube-scheduler-ha-925161-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 04:24:44.687700  145142 pod_ready.go:81] duration metric: took 399.847674ms for pod "kube-scheduler-ha-925161-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:24:44.687711  145142 pod_ready.go:38] duration metric: took 3.200605814s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 04:24:44.687729  145142 api_server.go:52] waiting for apiserver process to appear ...
	I0719 04:24:44.687795  145142 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 04:24:44.702903  145142 api_server.go:72] duration metric: took 21.023722699s to wait for apiserver process to appear ...
	I0719 04:24:44.702931  145142 api_server.go:88] waiting for apiserver healthz status ...
	I0719 04:24:44.702955  145142 api_server.go:253] Checking apiserver healthz at https://192.168.39.246:8443/healthz ...
	I0719 04:24:44.712256  145142 api_server.go:279] https://192.168.39.246:8443/healthz returned 200:
	ok
	I0719 04:24:44.712320  145142 round_trippers.go:463] GET https://192.168.39.246:8443/version
	I0719 04:24:44.712327  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:44.712335  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:44.712340  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:44.713127  145142 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 04:24:44.713222  145142 api_server.go:141] control plane version: v1.30.3
	I0719 04:24:44.713237  145142 api_server.go:131] duration metric: took 10.299058ms to wait for apiserver health ...
	I0719 04:24:44.713245  145142 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 04:24:44.883646  145142 request.go:629] Waited for 170.322673ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0719 04:24:44.883704  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0719 04:24:44.883711  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:44.883719  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:44.883726  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:44.889407  145142 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:24:44.893381  145142 system_pods.go:59] 17 kube-system pods found
	I0719 04:24:44.893406  145142 system_pods.go:61] "coredns-7db6d8ff4d-7wzcg" [a434f69a-903d-4961-a54c-9a85cbc694b1] Running
	I0719 04:24:44.893411  145142 system_pods.go:61] "coredns-7db6d8ff4d-hwdsq" [894f9528-78da-4cae-9ec6-8e82a3e73264] Running
	I0719 04:24:44.893415  145142 system_pods.go:61] "etcd-ha-925161" [35b14af9-6e7d-4e5c-8c43-fa427109cde3] Running
	I0719 04:24:44.893419  145142 system_pods.go:61] "etcd-ha-925161-m02" [51f60536-03dc-4426-ac13-9d2ec33275f7] Running
	I0719 04:24:44.893422  145142 system_pods.go:61] "kindnet-dkctc" [4ec93698-4a91-44fa-a37f-405bf1a5fa95] Running
	I0719 04:24:44.893424  145142 system_pods.go:61] "kindnet-fsr5f" [988e1118-927a-4468-ba25-3a78d8d06919] Running
	I0719 04:24:44.893428  145142 system_pods.go:61] "kube-apiserver-ha-925161" [1c56f8e6-beb8-4dcc-ba56-5097516043a6] Running
	I0719 04:24:44.893432  145142 system_pods.go:61] "kube-apiserver-ha-925161-m02" [ceaa5f20-d023-482a-9905-54f8bc47da20] Running
	I0719 04:24:44.893436  145142 system_pods.go:61] "kube-controller-manager-ha-925161" [337e75e4-92e9-48fd-a46a-73ce174b4995] Running
	I0719 04:24:44.893439  145142 system_pods.go:61] "kube-controller-manager-ha-925161-m02" [d2d234a3-a18f-4618-9b77-4bcf771463b8] Running
	I0719 04:24:44.893444  145142 system_pods.go:61] "kube-proxy-8dbqt" [cd11aac3-62df-4603-8102-3384bcc100f1] Running
	I0719 04:24:44.893450  145142 system_pods.go:61] "kube-proxy-s6df4" [3373d2d8-4189-48a0-aefc-2ad0511b2a6b] Running
	I0719 04:24:44.893453  145142 system_pods.go:61] "kube-scheduler-ha-925161" [6c1c9f30-93c9-4def-b54e-97b8e27cd12b] Running
	I0719 04:24:44.893456  145142 system_pods.go:61] "kube-scheduler-ha-925161-m02" [60ea2e22-0456-40bc-bddd-32b6737350b3] Running
	I0719 04:24:44.893459  145142 system_pods.go:61] "kube-vip-ha-925161" [8d01a874-336e-476c-b079-852250b3bbcd] Running
	I0719 04:24:44.893462  145142 system_pods.go:61] "kube-vip-ha-925161-m02" [0cb6b1ed-566b-4f64-903b-5af108816970] Running
	I0719 04:24:44.893467  145142 system_pods.go:61] "storage-provisioner" [bf27da3d-f736-4742-9af5-2c0a024075ec] Running
	I0719 04:24:44.893473  145142 system_pods.go:74] duration metric: took 180.220665ms to wait for pod list to return data ...
	I0719 04:24:44.893483  145142 default_sa.go:34] waiting for default service account to be created ...
	I0719 04:24:45.083908  145142 request.go:629] Waited for 190.345344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I0719 04:24:45.083977  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I0719 04:24:45.083985  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:45.083996  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:45.084003  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:45.087061  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:45.087310  145142 default_sa.go:45] found service account: "default"
	I0719 04:24:45.087332  145142 default_sa.go:55] duration metric: took 193.841784ms for default service account to be created ...
	I0719 04:24:45.087351  145142 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 04:24:45.283715  145142 request.go:629] Waited for 196.280501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0719 04:24:45.283788  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0719 04:24:45.283796  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:45.283804  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:45.283809  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:45.288696  145142 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:24:45.293002  145142 system_pods.go:86] 17 kube-system pods found
	I0719 04:24:45.293031  145142 system_pods.go:89] "coredns-7db6d8ff4d-7wzcg" [a434f69a-903d-4961-a54c-9a85cbc694b1] Running
	I0719 04:24:45.293039  145142 system_pods.go:89] "coredns-7db6d8ff4d-hwdsq" [894f9528-78da-4cae-9ec6-8e82a3e73264] Running
	I0719 04:24:45.293045  145142 system_pods.go:89] "etcd-ha-925161" [35b14af9-6e7d-4e5c-8c43-fa427109cde3] Running
	I0719 04:24:45.293051  145142 system_pods.go:89] "etcd-ha-925161-m02" [51f60536-03dc-4426-ac13-9d2ec33275f7] Running
	I0719 04:24:45.293057  145142 system_pods.go:89] "kindnet-dkctc" [4ec93698-4a91-44fa-a37f-405bf1a5fa95] Running
	I0719 04:24:45.293073  145142 system_pods.go:89] "kindnet-fsr5f" [988e1118-927a-4468-ba25-3a78d8d06919] Running
	I0719 04:24:45.293080  145142 system_pods.go:89] "kube-apiserver-ha-925161" [1c56f8e6-beb8-4dcc-ba56-5097516043a6] Running
	I0719 04:24:45.293087  145142 system_pods.go:89] "kube-apiserver-ha-925161-m02" [ceaa5f20-d023-482a-9905-54f8bc47da20] Running
	I0719 04:24:45.293094  145142 system_pods.go:89] "kube-controller-manager-ha-925161" [337e75e4-92e9-48fd-a46a-73ce174b4995] Running
	I0719 04:24:45.293101  145142 system_pods.go:89] "kube-controller-manager-ha-925161-m02" [d2d234a3-a18f-4618-9b77-4bcf771463b8] Running
	I0719 04:24:45.293117  145142 system_pods.go:89] "kube-proxy-8dbqt" [cd11aac3-62df-4603-8102-3384bcc100f1] Running
	I0719 04:24:45.293125  145142 system_pods.go:89] "kube-proxy-s6df4" [3373d2d8-4189-48a0-aefc-2ad0511b2a6b] Running
	I0719 04:24:45.293131  145142 system_pods.go:89] "kube-scheduler-ha-925161" [6c1c9f30-93c9-4def-b54e-97b8e27cd12b] Running
	I0719 04:24:45.293138  145142 system_pods.go:89] "kube-scheduler-ha-925161-m02" [60ea2e22-0456-40bc-bddd-32b6737350b3] Running
	I0719 04:24:45.293145  145142 system_pods.go:89] "kube-vip-ha-925161" [8d01a874-336e-476c-b079-852250b3bbcd] Running
	I0719 04:24:45.293151  145142 system_pods.go:89] "kube-vip-ha-925161-m02" [0cb6b1ed-566b-4f64-903b-5af108816970] Running
	I0719 04:24:45.293157  145142 system_pods.go:89] "storage-provisioner" [bf27da3d-f736-4742-9af5-2c0a024075ec] Running
	I0719 04:24:45.293168  145142 system_pods.go:126] duration metric: took 205.808287ms to wait for k8s-apps to be running ...
	I0719 04:24:45.293180  145142 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 04:24:45.293234  145142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:24:45.306948  145142 system_svc.go:56] duration metric: took 13.758933ms WaitForService to wait for kubelet
	I0719 04:24:45.306981  145142 kubeadm.go:582] duration metric: took 21.627805849s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 04:24:45.307006  145142 node_conditions.go:102] verifying NodePressure condition ...
	I0719 04:24:45.484291  145142 request.go:629] Waited for 177.207278ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes
	I0719 04:24:45.484368  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes
	I0719 04:24:45.484376  145142 round_trippers.go:469] Request Headers:
	I0719 04:24:45.484386  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:24:45.484396  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:24:45.487559  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:24:45.488510  145142 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 04:24:45.488533  145142 node_conditions.go:123] node cpu capacity is 2
	I0719 04:24:45.488548  145142 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 04:24:45.488552  145142 node_conditions.go:123] node cpu capacity is 2
	I0719 04:24:45.488558  145142 node_conditions.go:105] duration metric: took 181.546937ms to run NodePressure ...
	I0719 04:24:45.488572  145142 start.go:241] waiting for startup goroutines ...
	I0719 04:24:45.488604  145142 start.go:255] writing updated cluster config ...
	I0719 04:24:45.490487  145142 out.go:177] 
	I0719 04:24:45.491857  145142 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:24:45.492021  145142 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/config.json ...
	I0719 04:24:45.493666  145142 out.go:177] * Starting "ha-925161-m03" control-plane node in "ha-925161" cluster
	I0719 04:24:45.494700  145142 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 04:24:45.494718  145142 cache.go:56] Caching tarball of preloaded images
	I0719 04:24:45.494818  145142 preload.go:172] Found /home/jenkins/minikube-integration/19302-122995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 04:24:45.494831  145142 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 04:24:45.494912  145142 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/config.json ...
	I0719 04:24:45.495065  145142 start.go:360] acquireMachinesLock for ha-925161-m03: {Name:mkfbbe6ca8c44534b944b48224a0199ec825bc72 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 04:24:45.495118  145142 start.go:364] duration metric: took 31.277µs to acquireMachinesLock for "ha-925161-m03"
	I0719 04:24:45.495140  145142 start.go:93] Provisioning new machine with config: &{Name:ha-925161 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-925161 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 04:24:45.495233  145142 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0719 04:24:45.496679  145142 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 04:24:45.496756  145142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:24:45.496794  145142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:24:45.512273  145142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40107
	I0719 04:24:45.512703  145142 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:24:45.513189  145142 main.go:141] libmachine: Using API Version  1
	I0719 04:24:45.513209  145142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:24:45.513532  145142 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:24:45.513756  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetMachineName
	I0719 04:24:45.513896  145142 main.go:141] libmachine: (ha-925161-m03) Calling .DriverName
	I0719 04:24:45.514043  145142 start.go:159] libmachine.API.Create for "ha-925161" (driver="kvm2")
	I0719 04:24:45.514078  145142 client.go:168] LocalClient.Create starting
	I0719 04:24:45.514113  145142 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem
	I0719 04:24:45.514150  145142 main.go:141] libmachine: Decoding PEM data...
	I0719 04:24:45.514167  145142 main.go:141] libmachine: Parsing certificate...
	I0719 04:24:45.514234  145142 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem
	I0719 04:24:45.514256  145142 main.go:141] libmachine: Decoding PEM data...
	I0719 04:24:45.514269  145142 main.go:141] libmachine: Parsing certificate...
	I0719 04:24:45.514293  145142 main.go:141] libmachine: Running pre-create checks...
	I0719 04:24:45.514304  145142 main.go:141] libmachine: (ha-925161-m03) Calling .PreCreateCheck
	I0719 04:24:45.514493  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetConfigRaw
	I0719 04:24:45.514962  145142 main.go:141] libmachine: Creating machine...
	I0719 04:24:45.514981  145142 main.go:141] libmachine: (ha-925161-m03) Calling .Create
	I0719 04:24:45.515160  145142 main.go:141] libmachine: (ha-925161-m03) Creating KVM machine...
	I0719 04:24:45.516466  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found existing default KVM network
	I0719 04:24:45.516574  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found existing private KVM network mk-ha-925161
	I0719 04:24:45.516795  145142 main.go:141] libmachine: (ha-925161-m03) Setting up store path in /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03 ...
	I0719 04:24:45.516819  145142 main.go:141] libmachine: (ha-925161-m03) Building disk image from file:///home/jenkins/minikube-integration/19302-122995/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 04:24:45.516872  145142 main.go:141] libmachine: (ha-925161-m03) DBG | I0719 04:24:45.516768  145993 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19302-122995/.minikube
	I0719 04:24:45.516968  145142 main.go:141] libmachine: (ha-925161-m03) Downloading /home/jenkins/minikube-integration/19302-122995/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19302-122995/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 04:24:45.748018  145142 main.go:141] libmachine: (ha-925161-m03) DBG | I0719 04:24:45.747871  145993 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03/id_rsa...
	I0719 04:24:45.793443  145142 main.go:141] libmachine: (ha-925161-m03) DBG | I0719 04:24:45.793312  145993 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03/ha-925161-m03.rawdisk...
	I0719 04:24:45.793472  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Writing magic tar header
	I0719 04:24:45.793482  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Writing SSH key tar header
	I0719 04:24:45.793493  145142 main.go:141] libmachine: (ha-925161-m03) DBG | I0719 04:24:45.793428  145993 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03 ...
	I0719 04:24:45.793583  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03
	I0719 04:24:45.793605  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995/.minikube/machines
	I0719 04:24:45.793617  145142 main.go:141] libmachine: (ha-925161-m03) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03 (perms=drwx------)
	I0719 04:24:45.793631  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995/.minikube
	I0719 04:24:45.793647  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995
	I0719 04:24:45.793659  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0719 04:24:45.793672  145142 main.go:141] libmachine: (ha-925161-m03) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995/.minikube/machines (perms=drwxr-xr-x)
	I0719 04:24:45.793690  145142 main.go:141] libmachine: (ha-925161-m03) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995/.minikube (perms=drwxr-xr-x)
	I0719 04:24:45.793701  145142 main.go:141] libmachine: (ha-925161-m03) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995 (perms=drwxrwxr-x)
	I0719 04:24:45.793713  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Checking permissions on dir: /home/jenkins
	I0719 04:24:45.793730  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Checking permissions on dir: /home
	I0719 04:24:45.793743  145142 main.go:141] libmachine: (ha-925161-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0719 04:24:45.793754  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Skipping /home - not owner
	I0719 04:24:45.793768  145142 main.go:141] libmachine: (ha-925161-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0719 04:24:45.793778  145142 main.go:141] libmachine: (ha-925161-m03) Creating domain...
	I0719 04:24:45.794631  145142 main.go:141] libmachine: (ha-925161-m03) define libvirt domain using xml: 
	I0719 04:24:45.794657  145142 main.go:141] libmachine: (ha-925161-m03) <domain type='kvm'>
	I0719 04:24:45.794673  145142 main.go:141] libmachine: (ha-925161-m03)   <name>ha-925161-m03</name>
	I0719 04:24:45.794681  145142 main.go:141] libmachine: (ha-925161-m03)   <memory unit='MiB'>2200</memory>
	I0719 04:24:45.794712  145142 main.go:141] libmachine: (ha-925161-m03)   <vcpu>2</vcpu>
	I0719 04:24:45.794734  145142 main.go:141] libmachine: (ha-925161-m03)   <features>
	I0719 04:24:45.794743  145142 main.go:141] libmachine: (ha-925161-m03)     <acpi/>
	I0719 04:24:45.794750  145142 main.go:141] libmachine: (ha-925161-m03)     <apic/>
	I0719 04:24:45.794756  145142 main.go:141] libmachine: (ha-925161-m03)     <pae/>
	I0719 04:24:45.794764  145142 main.go:141] libmachine: (ha-925161-m03)     
	I0719 04:24:45.794772  145142 main.go:141] libmachine: (ha-925161-m03)   </features>
	I0719 04:24:45.794784  145142 main.go:141] libmachine: (ha-925161-m03)   <cpu mode='host-passthrough'>
	I0719 04:24:45.794797  145142 main.go:141] libmachine: (ha-925161-m03)   
	I0719 04:24:45.794804  145142 main.go:141] libmachine: (ha-925161-m03)   </cpu>
	I0719 04:24:45.794826  145142 main.go:141] libmachine: (ha-925161-m03)   <os>
	I0719 04:24:45.794846  145142 main.go:141] libmachine: (ha-925161-m03)     <type>hvm</type>
	I0719 04:24:45.794856  145142 main.go:141] libmachine: (ha-925161-m03)     <boot dev='cdrom'/>
	I0719 04:24:45.794866  145142 main.go:141] libmachine: (ha-925161-m03)     <boot dev='hd'/>
	I0719 04:24:45.794876  145142 main.go:141] libmachine: (ha-925161-m03)     <bootmenu enable='no'/>
	I0719 04:24:45.794885  145142 main.go:141] libmachine: (ha-925161-m03)   </os>
	I0719 04:24:45.794893  145142 main.go:141] libmachine: (ha-925161-m03)   <devices>
	I0719 04:24:45.794904  145142 main.go:141] libmachine: (ha-925161-m03)     <disk type='file' device='cdrom'>
	I0719 04:24:45.794925  145142 main.go:141] libmachine: (ha-925161-m03)       <source file='/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03/boot2docker.iso'/>
	I0719 04:24:45.794942  145142 main.go:141] libmachine: (ha-925161-m03)       <target dev='hdc' bus='scsi'/>
	I0719 04:24:45.794949  145142 main.go:141] libmachine: (ha-925161-m03)       <readonly/>
	I0719 04:24:45.794954  145142 main.go:141] libmachine: (ha-925161-m03)     </disk>
	I0719 04:24:45.794960  145142 main.go:141] libmachine: (ha-925161-m03)     <disk type='file' device='disk'>
	I0719 04:24:45.794969  145142 main.go:141] libmachine: (ha-925161-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0719 04:24:45.794981  145142 main.go:141] libmachine: (ha-925161-m03)       <source file='/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03/ha-925161-m03.rawdisk'/>
	I0719 04:24:45.794987  145142 main.go:141] libmachine: (ha-925161-m03)       <target dev='hda' bus='virtio'/>
	I0719 04:24:45.794992  145142 main.go:141] libmachine: (ha-925161-m03)     </disk>
	I0719 04:24:45.795001  145142 main.go:141] libmachine: (ha-925161-m03)     <interface type='network'>
	I0719 04:24:45.795007  145142 main.go:141] libmachine: (ha-925161-m03)       <source network='mk-ha-925161'/>
	I0719 04:24:45.795013  145142 main.go:141] libmachine: (ha-925161-m03)       <model type='virtio'/>
	I0719 04:24:45.795024  145142 main.go:141] libmachine: (ha-925161-m03)     </interface>
	I0719 04:24:45.795035  145142 main.go:141] libmachine: (ha-925161-m03)     <interface type='network'>
	I0719 04:24:45.795049  145142 main.go:141] libmachine: (ha-925161-m03)       <source network='default'/>
	I0719 04:24:45.795060  145142 main.go:141] libmachine: (ha-925161-m03)       <model type='virtio'/>
	I0719 04:24:45.795069  145142 main.go:141] libmachine: (ha-925161-m03)     </interface>
	I0719 04:24:45.795081  145142 main.go:141] libmachine: (ha-925161-m03)     <serial type='pty'>
	I0719 04:24:45.795090  145142 main.go:141] libmachine: (ha-925161-m03)       <target port='0'/>
	I0719 04:24:45.795100  145142 main.go:141] libmachine: (ha-925161-m03)     </serial>
	I0719 04:24:45.795120  145142 main.go:141] libmachine: (ha-925161-m03)     <console type='pty'>
	I0719 04:24:45.795133  145142 main.go:141] libmachine: (ha-925161-m03)       <target type='serial' port='0'/>
	I0719 04:24:45.795144  145142 main.go:141] libmachine: (ha-925161-m03)     </console>
	I0719 04:24:45.795158  145142 main.go:141] libmachine: (ha-925161-m03)     <rng model='virtio'>
	I0719 04:24:45.795171  145142 main.go:141] libmachine: (ha-925161-m03)       <backend model='random'>/dev/random</backend>
	I0719 04:24:45.795180  145142 main.go:141] libmachine: (ha-925161-m03)     </rng>
	I0719 04:24:45.795188  145142 main.go:141] libmachine: (ha-925161-m03)     
	I0719 04:24:45.795197  145142 main.go:141] libmachine: (ha-925161-m03)     
	I0719 04:24:45.795206  145142 main.go:141] libmachine: (ha-925161-m03)   </devices>
	I0719 04:24:45.795215  145142 main.go:141] libmachine: (ha-925161-m03) </domain>
	I0719 04:24:45.795234  145142 main.go:141] libmachine: (ha-925161-m03) 
	I0719 04:24:45.802289  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:eb:36:80 in network default
	I0719 04:24:45.802865  145142 main.go:141] libmachine: (ha-925161-m03) Ensuring networks are active...
	I0719 04:24:45.802887  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:24:45.803742  145142 main.go:141] libmachine: (ha-925161-m03) Ensuring network default is active
	I0719 04:24:45.804122  145142 main.go:141] libmachine: (ha-925161-m03) Ensuring network mk-ha-925161 is active
	I0719 04:24:45.804522  145142 main.go:141] libmachine: (ha-925161-m03) Getting domain xml...
	I0719 04:24:45.805309  145142 main.go:141] libmachine: (ha-925161-m03) Creating domain...
	I0719 04:24:47.015997  145142 main.go:141] libmachine: (ha-925161-m03) Waiting to get IP...
	I0719 04:24:47.016773  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:24:47.017215  145142 main.go:141] libmachine: (ha-925161-m03) DBG | unable to find current IP address of domain ha-925161-m03 in network mk-ha-925161
	I0719 04:24:47.017233  145142 main.go:141] libmachine: (ha-925161-m03) DBG | I0719 04:24:47.017197  145993 retry.go:31] will retry after 277.025133ms: waiting for machine to come up
	I0719 04:24:47.295814  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:24:47.296340  145142 main.go:141] libmachine: (ha-925161-m03) DBG | unable to find current IP address of domain ha-925161-m03 in network mk-ha-925161
	I0719 04:24:47.296373  145142 main.go:141] libmachine: (ha-925161-m03) DBG | I0719 04:24:47.296303  145993 retry.go:31] will retry after 346.173005ms: waiting for machine to come up
	I0719 04:24:47.643714  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:24:47.644205  145142 main.go:141] libmachine: (ha-925161-m03) DBG | unable to find current IP address of domain ha-925161-m03 in network mk-ha-925161
	I0719 04:24:47.644232  145142 main.go:141] libmachine: (ha-925161-m03) DBG | I0719 04:24:47.644151  145993 retry.go:31] will retry after 354.698058ms: waiting for machine to come up
	I0719 04:24:48.000724  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:24:48.001183  145142 main.go:141] libmachine: (ha-925161-m03) DBG | unable to find current IP address of domain ha-925161-m03 in network mk-ha-925161
	I0719 04:24:48.001206  145142 main.go:141] libmachine: (ha-925161-m03) DBG | I0719 04:24:48.001147  145993 retry.go:31] will retry after 455.182254ms: waiting for machine to come up
	I0719 04:24:48.457709  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:24:48.458155  145142 main.go:141] libmachine: (ha-925161-m03) DBG | unable to find current IP address of domain ha-925161-m03 in network mk-ha-925161
	I0719 04:24:48.458178  145142 main.go:141] libmachine: (ha-925161-m03) DBG | I0719 04:24:48.458122  145993 retry.go:31] will retry after 521.468381ms: waiting for machine to come up
	I0719 04:24:48.981537  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:24:48.981867  145142 main.go:141] libmachine: (ha-925161-m03) DBG | unable to find current IP address of domain ha-925161-m03 in network mk-ha-925161
	I0719 04:24:48.981921  145142 main.go:141] libmachine: (ha-925161-m03) DBG | I0719 04:24:48.981819  145993 retry.go:31] will retry after 619.202661ms: waiting for machine to come up
	I0719 04:24:49.602142  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:24:49.602622  145142 main.go:141] libmachine: (ha-925161-m03) DBG | unable to find current IP address of domain ha-925161-m03 in network mk-ha-925161
	I0719 04:24:49.602647  145142 main.go:141] libmachine: (ha-925161-m03) DBG | I0719 04:24:49.602581  145993 retry.go:31] will retry after 1.090091658s: waiting for machine to come up
	I0719 04:24:50.694118  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:24:50.694561  145142 main.go:141] libmachine: (ha-925161-m03) DBG | unable to find current IP address of domain ha-925161-m03 in network mk-ha-925161
	I0719 04:24:50.694596  145142 main.go:141] libmachine: (ha-925161-m03) DBG | I0719 04:24:50.694532  145993 retry.go:31] will retry after 1.444482953s: waiting for machine to come up
	I0719 04:24:52.140189  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:24:52.140684  145142 main.go:141] libmachine: (ha-925161-m03) DBG | unable to find current IP address of domain ha-925161-m03 in network mk-ha-925161
	I0719 04:24:52.140716  145142 main.go:141] libmachine: (ha-925161-m03) DBG | I0719 04:24:52.140619  145993 retry.go:31] will retry after 1.264022258s: waiting for machine to come up
	I0719 04:24:53.406252  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:24:53.406758  145142 main.go:141] libmachine: (ha-925161-m03) DBG | unable to find current IP address of domain ha-925161-m03 in network mk-ha-925161
	I0719 04:24:53.406781  145142 main.go:141] libmachine: (ha-925161-m03) DBG | I0719 04:24:53.406722  145993 retry.go:31] will retry after 1.423444201s: waiting for machine to come up
	I0719 04:24:54.831522  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:24:54.832037  145142 main.go:141] libmachine: (ha-925161-m03) DBG | unable to find current IP address of domain ha-925161-m03 in network mk-ha-925161
	I0719 04:24:54.832062  145142 main.go:141] libmachine: (ha-925161-m03) DBG | I0719 04:24:54.831981  145993 retry.go:31] will retry after 2.511156737s: waiting for machine to come up
	I0719 04:24:57.344288  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:24:57.344562  145142 main.go:141] libmachine: (ha-925161-m03) DBG | unable to find current IP address of domain ha-925161-m03 in network mk-ha-925161
	I0719 04:24:57.344591  145142 main.go:141] libmachine: (ha-925161-m03) DBG | I0719 04:24:57.344511  145993 retry.go:31] will retry after 3.426540062s: waiting for machine to come up
	I0719 04:25:00.773262  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:00.773769  145142 main.go:141] libmachine: (ha-925161-m03) DBG | unable to find current IP address of domain ha-925161-m03 in network mk-ha-925161
	I0719 04:25:00.773799  145142 main.go:141] libmachine: (ha-925161-m03) DBG | I0719 04:25:00.773727  145993 retry.go:31] will retry after 4.350683357s: waiting for machine to come up
	I0719 04:25:05.126142  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:05.126708  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has current primary IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:05.126726  145142 main.go:141] libmachine: (ha-925161-m03) Found IP for machine: 192.168.39.190
	I0719 04:25:05.126739  145142 main.go:141] libmachine: (ha-925161-m03) Reserving static IP address...
	I0719 04:25:05.127121  145142 main.go:141] libmachine: (ha-925161-m03) DBG | unable to find host DHCP lease matching {name: "ha-925161-m03", mac: "52:54:00:7e:5f:eb", ip: "192.168.39.190"} in network mk-ha-925161
	I0719 04:25:05.201307  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Getting to WaitForSSH function...
	I0719 04:25:05.201347  145142 main.go:141] libmachine: (ha-925161-m03) Reserved static IP address: 192.168.39.190
	I0719 04:25:05.201361  145142 main.go:141] libmachine: (ha-925161-m03) Waiting for SSH to be available...
	I0719 04:25:05.203824  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:05.204186  145142 main.go:141] libmachine: (ha-925161-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161
	I0719 04:25:05.204212  145142 main.go:141] libmachine: (ha-925161-m03) DBG | unable to find defined IP address of network mk-ha-925161 interface with MAC address 52:54:00:7e:5f:eb
	I0719 04:25:05.204403  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Using SSH client type: external
	I0719 04:25:05.204429  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03/id_rsa (-rw-------)
	I0719 04:25:05.204464  145142 main.go:141] libmachine: (ha-925161-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 04:25:05.204479  145142 main.go:141] libmachine: (ha-925161-m03) DBG | About to run SSH command:
	I0719 04:25:05.204509  145142 main.go:141] libmachine: (ha-925161-m03) DBG | exit 0
	I0719 04:25:05.208140  145142 main.go:141] libmachine: (ha-925161-m03) DBG | SSH cmd err, output: exit status 255: 
	I0719 04:25:05.208162  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0719 04:25:05.208169  145142 main.go:141] libmachine: (ha-925161-m03) DBG | command : exit 0
	I0719 04:25:05.208175  145142 main.go:141] libmachine: (ha-925161-m03) DBG | err     : exit status 255
	I0719 04:25:05.208213  145142 main.go:141] libmachine: (ha-925161-m03) DBG | output  : 
	I0719 04:25:08.210191  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Getting to WaitForSSH function...
	I0719 04:25:08.212633  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:08.213024  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:25:08.213060  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:08.213147  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Using SSH client type: external
	I0719 04:25:08.213184  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03/id_rsa (-rw-------)
	I0719 04:25:08.213215  145142 main.go:141] libmachine: (ha-925161-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.190 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 04:25:08.213228  145142 main.go:141] libmachine: (ha-925161-m03) DBG | About to run SSH command:
	I0719 04:25:08.213255  145142 main.go:141] libmachine: (ha-925161-m03) DBG | exit 0
	I0719 04:25:08.336885  145142 main.go:141] libmachine: (ha-925161-m03) DBG | SSH cmd err, output: <nil>: 
	I0719 04:25:08.337176  145142 main.go:141] libmachine: (ha-925161-m03) KVM machine creation complete!
	I0719 04:25:08.337537  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetConfigRaw
	I0719 04:25:08.338098  145142 main.go:141] libmachine: (ha-925161-m03) Calling .DriverName
	I0719 04:25:08.338325  145142 main.go:141] libmachine: (ha-925161-m03) Calling .DriverName
	I0719 04:25:08.338498  145142 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0719 04:25:08.338516  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetState
	I0719 04:25:08.339906  145142 main.go:141] libmachine: Detecting operating system of created instance...
	I0719 04:25:08.339923  145142 main.go:141] libmachine: Waiting for SSH to be available...
	I0719 04:25:08.339931  145142 main.go:141] libmachine: Getting to WaitForSSH function...
	I0719 04:25:08.339941  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHHostname
	I0719 04:25:08.342374  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:08.342802  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:25:08.342832  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:08.343011  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHPort
	I0719 04:25:08.343210  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:25:08.343453  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:25:08.343660  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHUsername
	I0719 04:25:08.343828  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:25:08.344130  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0719 04:25:08.344148  145142 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0719 04:25:08.444238  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 04:25:08.444261  145142 main.go:141] libmachine: Detecting the provisioner...
	I0719 04:25:08.444270  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHHostname
	I0719 04:25:08.447342  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:08.447711  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:25:08.447737  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:08.447949  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHPort
	I0719 04:25:08.448156  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:25:08.448292  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:25:08.448399  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHUsername
	I0719 04:25:08.448600  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:25:08.448806  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0719 04:25:08.448822  145142 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0719 04:25:08.549808  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0719 04:25:08.549874  145142 main.go:141] libmachine: found compatible host: buildroot
	I0719 04:25:08.549885  145142 main.go:141] libmachine: Provisioning with buildroot...
	I0719 04:25:08.549906  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetMachineName
	I0719 04:25:08.550207  145142 buildroot.go:166] provisioning hostname "ha-925161-m03"
	I0719 04:25:08.550237  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetMachineName
	I0719 04:25:08.550439  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHHostname
	I0719 04:25:08.552967  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:08.553374  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:25:08.553395  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:08.553561  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHPort
	I0719 04:25:08.553730  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:25:08.553856  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:25:08.554001  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHUsername
	I0719 04:25:08.554204  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:25:08.554363  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0719 04:25:08.554378  145142 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-925161-m03 && echo "ha-925161-m03" | sudo tee /etc/hostname
	I0719 04:25:08.670792  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-925161-m03
	
	I0719 04:25:08.670838  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHHostname
	I0719 04:25:08.673865  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:08.674347  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:25:08.674378  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:08.674677  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHPort
	I0719 04:25:08.674938  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:25:08.675116  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:25:08.675268  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHUsername
	I0719 04:25:08.675418  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:25:08.675614  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0719 04:25:08.675633  145142 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-925161-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-925161-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-925161-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 04:25:08.785771  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 04:25:08.785805  145142 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-122995/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-122995/.minikube}
	I0719 04:25:08.785829  145142 buildroot.go:174] setting up certificates
	I0719 04:25:08.785843  145142 provision.go:84] configureAuth start
	I0719 04:25:08.785859  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetMachineName
	I0719 04:25:08.786159  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetIP
	I0719 04:25:08.788778  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:08.789202  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:25:08.789238  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:08.789471  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHHostname
	I0719 04:25:08.791902  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:08.792363  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:25:08.792394  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:08.792516  145142 provision.go:143] copyHostCerts
	I0719 04:25:08.792550  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem
	I0719 04:25:08.792587  145142 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem, removing ...
	I0719 04:25:08.792598  145142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem
	I0719 04:25:08.792677  145142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem (1082 bytes)
	I0719 04:25:08.792774  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem
	I0719 04:25:08.792799  145142 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem, removing ...
	I0719 04:25:08.792809  145142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem
	I0719 04:25:08.792845  145142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem (1123 bytes)
	I0719 04:25:08.792906  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem
	I0719 04:25:08.792929  145142 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem, removing ...
	I0719 04:25:08.792937  145142 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem
	I0719 04:25:08.792973  145142 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem (1679 bytes)
	I0719 04:25:08.793041  145142 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem org=jenkins.ha-925161-m03 san=[127.0.0.1 192.168.39.190 ha-925161-m03 localhost minikube]
	I0719 04:25:08.931698  145142 provision.go:177] copyRemoteCerts
	I0719 04:25:08.931756  145142 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 04:25:08.931784  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHHostname
	I0719 04:25:08.934674  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:08.935001  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:25:08.935023  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:08.935337  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHPort
	I0719 04:25:08.935539  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:25:08.935681  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHUsername
	I0719 04:25:08.935811  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03/id_rsa Username:docker}
	I0719 04:25:09.014813  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 04:25:09.014894  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0719 04:25:09.037362  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 04:25:09.037428  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 04:25:09.059453  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 04:25:09.059533  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 04:25:09.081377  145142 provision.go:87] duration metric: took 295.517176ms to configureAuth
	I0719 04:25:09.081407  145142 buildroot.go:189] setting minikube options for container-runtime
	I0719 04:25:09.081666  145142 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:25:09.081764  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHHostname
	I0719 04:25:09.084474  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:09.084903  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:25:09.084926  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:09.085173  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHPort
	I0719 04:25:09.085391  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:25:09.085588  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:25:09.085734  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHUsername
	I0719 04:25:09.085868  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:25:09.086048  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0719 04:25:09.086067  145142 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 04:25:09.337632  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
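The "%!s(MISSING)" in the command echoed above is not part of what was executed: the command contains a literal %s intended for the shell's printf, and minikube's printf-style logger re-interprets that verb when it echoes the command string (the same artifact shows up below in `date +%s.%N` and the crictl.yaml write). A minimal Go sketch of that logging pitfall, using a hypothetical command string rather than minikube's own code:

package main

import "log"

func main() {
	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`

	// Passing the command as the format string re-interprets the literal %s
	// meant for the shell's printf, producing "%!s(MISSING)" as in the log:
	log.Printf("About to run SSH command:\n" + cmd)
	// Passing it as an argument keeps the command intact:
	log.Printf("About to run SSH command:\n%s", cmd)
}

Either way, the command that reaches the guest still contains the real %s and options string; only the log rendering is affected.
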
	I0719 04:25:09.337662  145142 main.go:141] libmachine: Checking connection to Docker...
	I0719 04:25:09.337673  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetURL
	I0719 04:25:09.339132  145142 main.go:141] libmachine: (ha-925161-m03) DBG | Using libvirt version 6000000
	I0719 04:25:09.341688  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:09.342084  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:25:09.342115  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:09.342281  145142 main.go:141] libmachine: Docker is up and running!
	I0719 04:25:09.342298  145142 main.go:141] libmachine: Reticulating splines...
	I0719 04:25:09.342305  145142 client.go:171] duration metric: took 23.828219304s to LocalClient.Create
	I0719 04:25:09.342330  145142 start.go:167] duration metric: took 23.828288361s to libmachine.API.Create "ha-925161"
	I0719 04:25:09.342343  145142 start.go:293] postStartSetup for "ha-925161-m03" (driver="kvm2")
	I0719 04:25:09.342474  145142 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 04:25:09.342510  145142 main.go:141] libmachine: (ha-925161-m03) Calling .DriverName
	I0719 04:25:09.342779  145142 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 04:25:09.342803  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHHostname
	I0719 04:25:09.345496  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:09.345835  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:25:09.345859  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:09.346014  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHPort
	I0719 04:25:09.346226  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:25:09.346405  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHUsername
	I0719 04:25:09.346563  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03/id_rsa Username:docker}
	I0719 04:25:09.427161  145142 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 04:25:09.431042  145142 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 04:25:09.431066  145142 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-122995/.minikube/addons for local assets ...
	I0719 04:25:09.431133  145142 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-122995/.minikube/files for local assets ...
	I0719 04:25:09.431203  145142 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem -> 1301702.pem in /etc/ssl/certs
	I0719 04:25:09.431216  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem -> /etc/ssl/certs/1301702.pem
	I0719 04:25:09.431329  145142 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 04:25:09.439889  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem --> /etc/ssl/certs/1301702.pem (1708 bytes)
	I0719 04:25:09.461424  145142 start.go:296] duration metric: took 118.951136ms for postStartSetup
	I0719 04:25:09.461486  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetConfigRaw
	I0719 04:25:09.462127  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetIP
	I0719 04:25:09.464905  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:09.465308  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:25:09.465331  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:09.465615  145142 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/config.json ...
	I0719 04:25:09.465801  145142 start.go:128] duration metric: took 23.970556216s to createHost
	I0719 04:25:09.465825  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHHostname
	I0719 04:25:09.468059  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:09.468371  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:25:09.468397  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:09.468510  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHPort
	I0719 04:25:09.468685  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:25:09.468857  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:25:09.469033  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHUsername
	I0719 04:25:09.469239  145142 main.go:141] libmachine: Using SSH client type: native
	I0719 04:25:09.469429  145142 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0719 04:25:09.469440  145142 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 04:25:09.570447  145142 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721363109.550120349
	
	I0719 04:25:09.570473  145142 fix.go:216] guest clock: 1721363109.550120349
	I0719 04:25:09.570483  145142 fix.go:229] Guest: 2024-07-19 04:25:09.550120349 +0000 UTC Remote: 2024-07-19 04:25:09.465813538 +0000 UTC m=+159.718937610 (delta=84.306811ms)
	I0719 04:25:09.570503  145142 fix.go:200] guest clock delta is within tolerance: 84.306811ms
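The clock check above runs `date +%s.%N` on the guest, parses the result, and compares it against the host-side reference time, accepting the ~84 ms delta. A rough, self-contained Go sketch of that comparison; the one-second tolerance and the hard-coded timestamps are illustrative assumptions, not minikube's constants:

package main

import (
	"fmt"
	"strconv"
	"time"
)

// parseGuestClock converts the output of `date +%s.%N` into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	f, err := strconv.ParseFloat(out, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(f)
	nsec := int64((f - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = time.Second // assumed threshold for this sketch

	guest, err := parseGuestClock("1721363109.550120349") // value from the log above
	if err != nil {
		panic(err)
	}
	remote := time.Date(2024, 7, 19, 4, 25, 9, 465813538, time.UTC) // host-side reference
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
}
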
	I0719 04:25:09.570510  145142 start.go:83] releasing machines lock for "ha-925161-m03", held for 24.075380293s
	I0719 04:25:09.570534  145142 main.go:141] libmachine: (ha-925161-m03) Calling .DriverName
	I0719 04:25:09.570805  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetIP
	I0719 04:25:09.573667  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:09.574164  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:25:09.574203  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:09.576636  145142 out.go:177] * Found network options:
	I0719 04:25:09.578072  145142 out.go:177]   - NO_PROXY=192.168.39.246,192.168.39.102
	W0719 04:25:09.579382  145142 proxy.go:119] fail to check proxy env: Error ip not in block
	W0719 04:25:09.579416  145142 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 04:25:09.579434  145142 main.go:141] libmachine: (ha-925161-m03) Calling .DriverName
	I0719 04:25:09.580084  145142 main.go:141] libmachine: (ha-925161-m03) Calling .DriverName
	I0719 04:25:09.580346  145142 main.go:141] libmachine: (ha-925161-m03) Calling .DriverName
	I0719 04:25:09.580456  145142 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 04:25:09.580496  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHHostname
	W0719 04:25:09.580557  145142 proxy.go:119] fail to check proxy env: Error ip not in block
	W0719 04:25:09.580586  145142 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 04:25:09.580655  145142 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 04:25:09.580678  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHHostname
	I0719 04:25:09.583380  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:09.583405  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:09.583788  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:25:09.583813  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:09.583972  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:25:09.583996  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHPort
	I0719 04:25:09.583999  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:09.584193  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:25:09.584242  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHPort
	I0719 04:25:09.584387  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHUsername
	I0719 04:25:09.584407  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:25:09.584637  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03/id_rsa Username:docker}
	I0719 04:25:09.584667  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHUsername
	I0719 04:25:09.584813  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03/id_rsa Username:docker}
	I0719 04:25:09.816201  145142 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 04:25:09.822223  145142 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 04:25:09.822314  145142 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 04:25:09.837919  145142 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 04:25:09.837944  145142 start.go:495] detecting cgroup driver to use...
	I0719 04:25:09.838012  145142 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 04:25:09.854894  145142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 04:25:09.868083  145142 docker.go:217] disabling cri-docker service (if available) ...
	I0719 04:25:09.868143  145142 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 04:25:09.881305  145142 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 04:25:09.894290  145142 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 04:25:10.008511  145142 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 04:25:10.148950  145142 docker.go:233] disabling docker service ...
	I0719 04:25:10.149020  145142 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 04:25:10.163566  145142 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 04:25:10.178022  145142 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 04:25:10.334596  145142 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 04:25:10.465736  145142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 04:25:10.478989  145142 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 04:25:10.497102  145142 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 04:25:10.497178  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:25:10.507362  145142 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 04:25:10.507440  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:25:10.517566  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:25:10.527265  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:25:10.536829  145142 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 04:25:10.546961  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:25:10.556566  145142 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:25:10.572316  145142 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
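The sequence of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, move conmon into the pod cgroup, and open unprivileged ports via default_sysctls. A tiny Go sketch of the first two rewrites over an inline sample config; the regexes mirror the sed expressions, but the sample file content is made up for illustration:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.runtime]
cgroup_manager = "systemd"
[crio.image]
pause_image = "registry.k8s.io/pause:3.8"
`
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}
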
	I0719 04:25:10.582162  145142 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 04:25:10.591369  145142 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 04:25:10.591430  145142 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 04:25:10.604198  145142 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
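The failed sysctl probe above is expected on a fresh guest: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded, so the code falls back to modprobe and then enables IPv4 forwarding. A standalone Go sketch of that fallback, as a hypothetical local helper rather than the ssh_runner-based implementation:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBrNetfilter mirrors the fallback seen above: if the bridge-nf-call-iptables
// sysctl is missing, load br_netfilter, then enable IPv4 forwarding.
func ensureBrNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// The sysctl only appears after the br_netfilter module is loaded.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBrNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
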
	I0719 04:25:10.613207  145142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:25:10.734874  145142 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 04:25:10.870466  145142 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 04:25:10.870545  145142 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 04:25:10.875402  145142 start.go:563] Will wait 60s for crictl version
	I0719 04:25:10.875469  145142 ssh_runner.go:195] Run: which crictl
	I0719 04:25:10.879049  145142 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 04:25:10.921854  145142 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 04:25:10.921933  145142 ssh_runner.go:195] Run: crio --version
	I0719 04:25:10.949193  145142 ssh_runner.go:195] Run: crio --version
	I0719 04:25:10.977659  145142 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 04:25:10.979121  145142 out.go:177]   - env NO_PROXY=192.168.39.246
	I0719 04:25:10.980765  145142 out.go:177]   - env NO_PROXY=192.168.39.246,192.168.39.102
	I0719 04:25:10.982367  145142 main.go:141] libmachine: (ha-925161-m03) Calling .GetIP
	I0719 04:25:10.985396  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:10.985955  145142 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:25:10.985981  145142 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:25:10.986209  145142 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 04:25:10.990177  145142 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
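The /etc/hosts edit above follows the usual idempotent pattern: drop any existing `host.minikube.internal` line, then append the current mapping. A small Go equivalent, operating on a local test file instead of the remote /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites a hosts file, removing any existing line for hostname and
// appending a fresh "ip<TAB>hostname" entry -- the same effect as the
// grep -v / echo pipeline run above. Paths here are illustrative.
func upsertHost(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+hostname) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("hosts.test", "192.168.39.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}
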
	I0719 04:25:11.001885  145142 mustload.go:65] Loading cluster: ha-925161
	I0719 04:25:11.002121  145142 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:25:11.002450  145142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:25:11.002501  145142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:25:11.018736  145142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42461
	I0719 04:25:11.019224  145142 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:25:11.019696  145142 main.go:141] libmachine: Using API Version  1
	I0719 04:25:11.019720  145142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:25:11.020042  145142 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:25:11.020260  145142 main.go:141] libmachine: (ha-925161) Calling .GetState
	I0719 04:25:11.021841  145142 host.go:66] Checking if "ha-925161" exists ...
	I0719 04:25:11.022135  145142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:25:11.022170  145142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:25:11.037341  145142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38671
	I0719 04:25:11.037778  145142 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:25:11.038254  145142 main.go:141] libmachine: Using API Version  1
	I0719 04:25:11.038290  145142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:25:11.038574  145142 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:25:11.038765  145142 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:25:11.038954  145142 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161 for IP: 192.168.39.190
	I0719 04:25:11.038968  145142 certs.go:194] generating shared ca certs ...
	I0719 04:25:11.038987  145142 certs.go:226] acquiring lock for ca certs: {Name:mk4073377b5f511f5cfaf63e5b0f12377e731a57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:25:11.039124  145142 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key
	I0719 04:25:11.039188  145142 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key
	I0719 04:25:11.039202  145142 certs.go:256] generating profile certs ...
	I0719 04:25:11.039295  145142 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/client.key
	I0719 04:25:11.039328  145142 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key.84697c77
	I0719 04:25:11.039355  145142 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt.84697c77 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246 192.168.39.102 192.168.39.190 192.168.39.254]
	I0719 04:25:11.567437  145142 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt.84697c77 ...
	I0719 04:25:11.567471  145142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt.84697c77: {Name:mk373f1857bc49369966cfa39fe8c1a2e380ab66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:25:11.567658  145142 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key.84697c77 ...
	I0719 04:25:11.567672  145142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key.84697c77: {Name:mkd1589f36926e43cc9ee20b274551dfc36ba7a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:25:11.567745  145142 certs.go:381] copying /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt.84697c77 -> /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt
	I0719 04:25:11.567865  145142 certs.go:385] copying /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key.84697c77 -> /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key
	I0719 04:25:11.567989  145142 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.key
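The apiserver certificate generated above carries every address a client might use to reach this control plane: the in-cluster service IP, localhost, the three node IPs, and the kube-vip VIP 192.168.39.254. A compressed Go sketch of issuing such a certificate with crypto/x509; it creates a throwaway CA in-process instead of reusing the cached minikubeCA key, and error handling is reduced to a `must` helper:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Throwaway CA standing in for minikubeCA; the real flow reuses the cached CA key.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caCert := must(x509.ParseCertificate(must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))))

	// Server certificate whose IP SANs match the list logged above.
	srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.246"), net.ParseIP("192.168.39.102"),
			net.ParseIP("192.168.39.190"), net.ParseIP("192.168.39.254"),
		},
	}
	der := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
	fmt.Printf("issued apiserver cert: %d bytes DER, %d IP SANs\n", len(der), len(srvTmpl.IPAddresses))
}
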
	I0719 04:25:11.568005  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 04:25:11.568017  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0719 04:25:11.568030  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 04:25:11.568043  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 04:25:11.568055  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 04:25:11.568071  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 04:25:11.568083  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 04:25:11.568095  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 04:25:11.568144  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem (1338 bytes)
	W0719 04:25:11.568172  145142 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170_empty.pem, impossibly tiny 0 bytes
	I0719 04:25:11.568181  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 04:25:11.568204  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem (1082 bytes)
	I0719 04:25:11.568227  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem (1123 bytes)
	I0719 04:25:11.568247  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem (1679 bytes)
	I0719 04:25:11.568281  145142 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem (1708 bytes)
	I0719 04:25:11.568351  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:25:11.568372  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem -> /usr/share/ca-certificates/130170.pem
	I0719 04:25:11.568384  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem -> /usr/share/ca-certificates/1301702.pem
	I0719 04:25:11.568417  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:25:11.571552  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:25:11.571928  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:25:11.571964  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:25:11.572198  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:25:11.572464  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:25:11.572632  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:25:11.572782  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:25:11.645507  145142 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0719 04:25:11.650229  145142 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0719 04:25:11.661243  145142 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0719 04:25:11.665650  145142 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0719 04:25:11.681346  145142 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0719 04:25:11.687467  145142 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0719 04:25:11.698118  145142 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0719 04:25:11.701925  145142 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0719 04:25:11.712824  145142 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0719 04:25:11.716812  145142 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0719 04:25:11.726777  145142 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0719 04:25:11.731335  145142 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0719 04:25:11.741502  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 04:25:11.765620  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 04:25:11.789211  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 04:25:11.813083  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 04:25:11.838453  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0719 04:25:11.863963  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 04:25:11.888495  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 04:25:11.912939  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 04:25:11.935621  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 04:25:11.957513  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem --> /usr/share/ca-certificates/130170.pem (1338 bytes)
	I0719 04:25:11.980784  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem --> /usr/share/ca-certificates/1301702.pem (1708 bytes)
	I0719 04:25:12.004296  145142 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0719 04:25:12.020460  145142 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0719 04:25:12.036721  145142 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0719 04:25:12.052426  145142 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0719 04:25:12.067790  145142 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0719 04:25:12.084563  145142 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0719 04:25:12.101359  145142 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0719 04:25:12.117840  145142 ssh_runner.go:195] Run: openssl version
	I0719 04:25:12.123111  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 04:25:12.132876  145142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:25:12.136942  145142 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:38 /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:25:12.137008  145142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:25:12.142543  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 04:25:12.152054  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130170.pem && ln -fs /usr/share/ca-certificates/130170.pem /etc/ssl/certs/130170.pem"
	I0719 04:25:12.161572  145142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130170.pem
	I0719 04:25:12.165628  145142 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 04:19 /usr/share/ca-certificates/130170.pem
	I0719 04:25:12.165674  145142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130170.pem
	I0719 04:25:12.171083  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/130170.pem /etc/ssl/certs/51391683.0"
	I0719 04:25:12.182216  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1301702.pem && ln -fs /usr/share/ca-certificates/1301702.pem /etc/ssl/certs/1301702.pem"
	I0719 04:25:12.192475  145142 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1301702.pem
	I0719 04:25:12.196619  145142 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 04:19 /usr/share/ca-certificates/1301702.pem
	I0719 04:25:12.196682  145142 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1301702.pem
	I0719 04:25:12.201974  145142 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1301702.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 04:25:12.212165  145142 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 04:25:12.215954  145142 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 04:25:12.216017  145142 kubeadm.go:934] updating node {m03 192.168.39.190 8443 v1.30.3 crio true true} ...
	I0719 04:25:12.216173  145142 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-925161-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.190
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-925161 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 04:25:12.216208  145142 kube-vip.go:115] generating kube-vip config ...
	I0719 04:25:12.216249  145142 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0719 04:25:12.232292  145142 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0719 04:25:12.232359  145142 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0719 04:25:12.232410  145142 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 04:25:12.241087  145142 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0719 04:25:12.241153  145142 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0719 04:25:12.249989  145142 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0719 04:25:12.250028  145142 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0719 04:25:12.249989  145142 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0719 04:25:12.250039  145142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:25:12.250048  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0719 04:25:12.250052  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0719 04:25:12.250134  145142 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0719 04:25:12.250134  145142 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0719 04:25:12.254062  145142 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0719 04:25:12.254092  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0719 04:25:12.273890  145142 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0719 04:25:12.273940  145142 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0719 04:25:12.273942  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0719 04:25:12.274129  145142 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0719 04:25:12.329661  145142 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0719 04:25:12.329720  145142 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
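The kubelet, kubectl, and kubeadm binaries above are fetched with a `checksum=file:<url>.sha256` source, meaning each download is verified against the SHA-256 digest published alongside it before being copied onto the node. A self-contained Go sketch of that verify-while-downloading pattern; it is illustrative only, since minikube's downloader also caches under .minikube/cache and retries:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchWithChecksum downloads url into dest and verifies the bytes against the
// hex digest served at url+".sha256".
func fetchWithChecksum(url, dest string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("GET %s: %s", url, resp.Status)
	}

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	// Hash the stream while writing it to disk.
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}

	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	want, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}

	if got := hex.EncodeToString(h.Sum(nil)); got != strings.Fields(string(want))[0] {
		return fmt.Errorf("checksum mismatch for %s: got %s", url, got)
	}
	return nil
}

func main() {
	if err := fetchWithChecksum("https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl", "kubectl"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
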
	I0719 04:25:13.105116  145142 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0719 04:25:13.114745  145142 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0719 04:25:13.130878  145142 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 04:25:13.146901  145142 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0719 04:25:13.163498  145142 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0719 04:25:13.167247  145142 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 04:25:13.180301  145142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:25:13.333576  145142 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 04:25:13.350966  145142 host.go:66] Checking if "ha-925161" exists ...
	I0719 04:25:13.351327  145142 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:25:13.351368  145142 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:25:13.366893  145142 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39289
	I0719 04:25:13.367315  145142 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:25:13.367879  145142 main.go:141] libmachine: Using API Version  1
	I0719 04:25:13.367905  145142 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:25:13.368277  145142 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:25:13.368500  145142 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:25:13.368660  145142 start.go:317] joinCluster: &{Name:ha-925161 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-925161 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:25:13.368829  145142 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0719 04:25:13.368850  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:25:13.371895  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:25:13.372431  145142 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:25:13.372461  145142 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:25:13.372623  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:25:13.372827  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:25:13.372983  145142 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:25:13.373168  145142 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:25:13.533338  145142 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 04:25:13.533397  145142 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0kajtd.tjg4friexfw44gr8 --discovery-token-ca-cert-hash sha256:1b8c9b438cd382daae07d0c80077e3e844c6e3a56a419c26c4cfa86e5846b833 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-925161-m03 --control-plane --apiserver-advertise-address=192.168.39.190 --apiserver-bind-port=8443"
	I0719 04:25:37.396567  145142 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0kajtd.tjg4friexfw44gr8 --discovery-token-ca-cert-hash sha256:1b8c9b438cd382daae07d0c80077e3e844c6e3a56a419c26c4cfa86e5846b833 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-925161-m03 --control-plane --apiserver-advertise-address=192.168.39.190 --apiserver-bind-port=8443": (23.863139662s)
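The `--discovery-token-ca-cert-hash sha256:…` pinned in the join command above is the SHA-256 of the cluster CA's public key in SubjectPublicKeyInfo (DER) form; it is what lets the joining node authenticate the control plane it discovers through the bootstrap token. A short Go sketch that recomputes the hash from a ca.crt; the file path is illustrative:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path is illustrative; on a minikube node the CA lives under /var/lib/minikube/certs.
	data, err := os.ReadFile("ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA's public key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
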
	I0719 04:25:37.396608  145142 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0719 04:25:38.006840  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-925161-m03 minikube.k8s.io/updated_at=2024_07_19T04_25_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db minikube.k8s.io/name=ha-925161 minikube.k8s.io/primary=false
	I0719 04:25:38.124813  145142 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-925161-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0719 04:25:38.236587  145142 start.go:319] duration metric: took 24.867922687s to joinCluster
	I0719 04:25:38.236685  145142 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 04:25:38.237022  145142 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:25:38.238244  145142 out.go:177] * Verifying Kubernetes components...
	I0719 04:25:38.239737  145142 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:25:38.483563  145142 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 04:25:38.548096  145142 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19302-122995/kubeconfig
	I0719 04:25:38.548374  145142 kapi.go:59] client config for ha-925161: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/client.crt", KeyFile:"/home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/client.key", CAFile:"/home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0719 04:25:38.548437  145142 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.246:8443
	I0719 04:25:38.548683  145142 node_ready.go:35] waiting up to 6m0s for node "ha-925161-m03" to be "Ready" ...
	I0719 04:25:38.548763  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:38.548774  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:38.548785  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:38.548793  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:38.552631  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:39.049410  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:39.049435  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:39.049444  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:39.049450  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:39.053503  145142 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:25:39.549845  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:39.549874  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:39.549885  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:39.549891  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:39.553566  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:40.049392  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:40.049418  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:40.049434  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:40.049438  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:40.052716  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:40.549235  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:40.549259  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:40.549270  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:40.549277  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:40.553259  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:40.553997  145142 node_ready.go:53] node "ha-925161-m03" has status "Ready":"False"
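The readiness loop above simply re-GETs the node object roughly every 500 ms and checks whether its Ready condition has flipped to True, for up to the six-minute budget declared earlier. A minimal Go poller in the same spirit; it assumes an unauthenticated local endpoint such as `kubectl proxy` on 127.0.0.1:8001 rather than the client-certificate config minikube builds:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// node captures only the condition fields needed to decide readiness.
type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func nodeReady(base, name string) (bool, error) {
	resp, err := http.Get(base + "/api/v1/nodes/" + name)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var n node
	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if ready, err := nodeReady("http://127.0.0.1:8001", "ha-925161-m03"); err == nil && ready {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node to become Ready")
}
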
	I0719 04:25:41.049228  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:41.049249  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:41.049261  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:41.049266  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:41.053031  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:41.549512  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:41.549533  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:41.549541  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:41.549546  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:41.553346  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:42.049652  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:42.049694  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:42.049710  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:42.049716  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:42.052936  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:42.549384  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:42.549404  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:42.549413  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:42.549418  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:42.554109  145142 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:25:42.555084  145142 node_ready.go:53] node "ha-925161-m03" has status "Ready":"False"
	I0719 04:25:43.049381  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:43.049407  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:43.049418  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:43.049426  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:43.052749  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:43.549940  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:43.549962  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:43.549970  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:43.549973  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:43.553484  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:44.049655  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:44.049689  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:44.049710  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:44.049717  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:44.052716  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:25:44.549744  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:44.549769  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:44.549779  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:44.549785  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:44.553660  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:45.048924  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:45.048948  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:45.048956  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:45.048960  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:45.052171  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:45.052951  145142 node_ready.go:53] node "ha-925161-m03" has status "Ready":"False"
	I0719 04:25:45.549607  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:45.549632  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:45.549645  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:45.549651  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:45.553046  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:46.048833  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:46.048855  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:46.048863  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:46.048868  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:46.052096  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:46.549440  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:46.549464  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:46.549476  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:46.549482  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:46.552366  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:25:47.049236  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:47.049262  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:47.049275  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:47.049280  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:47.053113  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:47.053626  145142 node_ready.go:53] node "ha-925161-m03" has status "Ready":"False"
	I0719 04:25:47.549474  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:47.549550  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:47.549566  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:47.549572  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:47.553971  145142 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:25:48.048975  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:48.048998  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:48.049006  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:48.049010  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:48.052841  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:48.548896  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:48.548918  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:48.548926  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:48.548930  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:48.552539  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:49.049486  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:49.049507  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:49.049515  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:49.049519  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:49.052729  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:49.549738  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:49.549764  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:49.549776  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:49.549782  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:49.553116  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:49.553814  145142 node_ready.go:53] node "ha-925161-m03" has status "Ready":"False"
	I0719 04:25:50.049901  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:50.049932  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:50.049944  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:50.049952  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:50.053305  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:50.549885  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:50.549908  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:50.549918  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:50.549923  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:50.553396  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:51.049280  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:51.049298  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:51.049310  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:51.049321  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:51.052449  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:51.549329  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:51.549354  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:51.549365  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:51.549370  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:51.552531  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:52.049876  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:52.049902  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:52.049914  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:52.049919  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:52.052842  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:25:52.053631  145142 node_ready.go:53] node "ha-925161-m03" has status "Ready":"False"
	I0719 04:25:52.549220  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:52.549241  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:52.549250  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:52.549254  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:52.552348  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:53.049767  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:53.049790  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:53.049800  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:53.049804  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:53.053107  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:53.549332  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:53.549358  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:53.549369  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:53.549374  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:53.552631  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:54.049552  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:54.049574  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:54.049582  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:54.049586  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:54.052677  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:54.549757  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:54.549781  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:54.549792  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:54.549800  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:54.553100  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:54.553667  145142 node_ready.go:53] node "ha-925161-m03" has status "Ready":"False"
	I0719 04:25:55.049799  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:55.049828  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:55.049839  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:55.049846  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:55.053891  145142 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:25:55.549226  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:55.549244  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:55.549252  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:55.549256  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:55.552834  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:56.049339  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:56.049362  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:56.049374  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:56.049380  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:56.052933  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:56.053768  145142 node_ready.go:49] node "ha-925161-m03" has status "Ready":"True"
	I0719 04:25:56.053791  145142 node_ready.go:38] duration metric: took 17.505093181s for node "ha-925161-m03" to be "Ready" ...
	I0719 04:25:56.053801  145142 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 04:25:56.053873  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0719 04:25:56.053884  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:56.053891  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:56.053898  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:56.060659  145142 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:25:56.067354  145142 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7wzcg" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:56.067437  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-7wzcg
	I0719 04:25:56.067445  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:56.067452  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:56.067456  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:56.071268  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:56.072407  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:25:56.072420  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:56.072428  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:56.072432  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:56.075974  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:56.076571  145142 pod_ready.go:92] pod "coredns-7db6d8ff4d-7wzcg" in "kube-system" namespace has status "Ready":"True"
	I0719 04:25:56.076612  145142 pod_ready.go:81] duration metric: took 9.232088ms for pod "coredns-7db6d8ff4d-7wzcg" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:56.076625  145142 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hwdsq" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:56.076695  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hwdsq
	I0719 04:25:56.076707  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:56.076716  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:56.076722  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:56.079529  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:25:56.080117  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:25:56.080129  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:56.080136  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:56.080140  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:56.083662  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:56.084597  145142 pod_ready.go:92] pod "coredns-7db6d8ff4d-hwdsq" in "kube-system" namespace has status "Ready":"True"
	I0719 04:25:56.084614  145142 pod_ready.go:81] duration metric: took 7.983149ms for pod "coredns-7db6d8ff4d-hwdsq" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:56.084623  145142 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-925161" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:56.084676  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-925161
	I0719 04:25:56.084686  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:56.084703  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:56.084711  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:56.087849  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:56.088515  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:25:56.088531  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:56.088538  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:56.088542  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:56.092101  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:56.092552  145142 pod_ready.go:92] pod "etcd-ha-925161" in "kube-system" namespace has status "Ready":"True"
	I0719 04:25:56.092568  145142 pod_ready.go:81] duration metric: took 7.940039ms for pod "etcd-ha-925161" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:56.092576  145142 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-925161-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:56.092638  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-925161-m02
	I0719 04:25:56.092649  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:56.092658  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:56.092663  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:56.100570  145142 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 04:25:56.101216  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:25:56.101230  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:56.101237  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:56.101241  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:56.103631  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:25:56.104014  145142 pod_ready.go:92] pod "etcd-ha-925161-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 04:25:56.104030  145142 pod_ready.go:81] duration metric: took 11.448439ms for pod "etcd-ha-925161-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:56.104040  145142 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-925161-m03" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:56.249352  145142 request.go:629] Waited for 145.229729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-925161-m03
	I0719 04:25:56.249425  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/etcd-ha-925161-m03
	I0719 04:25:56.249430  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:56.249437  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:56.249443  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:56.252774  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:56.449798  145142 request.go:629] Waited for 196.362556ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:56.449867  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:56.449874  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:56.449885  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:56.449892  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:56.453499  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:56.453990  145142 pod_ready.go:92] pod "etcd-ha-925161-m03" in "kube-system" namespace has status "Ready":"True"
	I0719 04:25:56.454014  145142 pod_ready.go:81] duration metric: took 349.966859ms for pod "etcd-ha-925161-m03" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:56.454038  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-925161" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:56.650141  145142 request.go:629] Waited for 196.006293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-925161
	I0719 04:25:56.650212  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-925161
	I0719 04:25:56.650221  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:56.650232  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:56.650245  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:56.653688  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:56.849639  145142 request.go:629] Waited for 195.358648ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:25:56.849732  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:25:56.849741  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:56.849750  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:56.849756  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:56.852822  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:56.853646  145142 pod_ready.go:92] pod "kube-apiserver-ha-925161" in "kube-system" namespace has status "Ready":"True"
	I0719 04:25:56.853674  145142 pod_ready.go:81] duration metric: took 399.623518ms for pod "kube-apiserver-ha-925161" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:56.853688  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-925161-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:57.049570  145142 request.go:629] Waited for 195.803774ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-925161-m02
	I0719 04:25:57.049672  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-925161-m02
	I0719 04:25:57.049684  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:57.049696  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:57.049707  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:57.053372  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:57.250260  145142 request.go:629] Waited for 196.267735ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:25:57.250336  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:25:57.250348  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:57.250359  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:57.250369  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:57.253523  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:57.253994  145142 pod_ready.go:92] pod "kube-apiserver-ha-925161-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 04:25:57.254013  145142 pod_ready.go:81] duration metric: took 400.316599ms for pod "kube-apiserver-ha-925161-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:57.254025  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-925161-m03" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:57.449485  145142 request.go:629] Waited for 195.37046ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-925161-m03
	I0719 04:25:57.449558  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-925161-m03
	I0719 04:25:57.449570  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:57.449580  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:57.449589  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:57.453549  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:57.649581  145142 request.go:629] Waited for 195.278712ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:57.649652  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:57.649660  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:57.649670  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:57.649674  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:57.652290  145142 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:25:57.652835  145142 pod_ready.go:92] pod "kube-apiserver-ha-925161-m03" in "kube-system" namespace has status "Ready":"True"
	I0719 04:25:57.652857  145142 pod_ready.go:81] duration metric: took 398.823668ms for pod "kube-apiserver-ha-925161-m03" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:57.652869  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-925161" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:57.849748  145142 request.go:629] Waited for 196.791111ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-925161
	I0719 04:25:57.849824  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-925161
	I0719 04:25:57.849829  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:57.849835  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:57.849840  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:57.853222  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:58.050339  145142 request.go:629] Waited for 196.349823ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:25:58.050422  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:25:58.050430  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:58.050437  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:58.050443  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:58.053777  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:58.054507  145142 pod_ready.go:92] pod "kube-controller-manager-ha-925161" in "kube-system" namespace has status "Ready":"True"
	I0719 04:25:58.054526  145142 pod_ready.go:81] duration metric: took 401.64792ms for pod "kube-controller-manager-ha-925161" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:58.054538  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-925161-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:58.249660  145142 request.go:629] Waited for 195.049698ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-925161-m02
	I0719 04:25:58.249723  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-925161-m02
	I0719 04:25:58.249729  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:58.249737  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:58.249740  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:58.252894  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:58.450122  145142 request.go:629] Waited for 196.378279ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:25:58.450213  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:25:58.450224  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:58.450242  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:58.450253  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:58.454020  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:58.454596  145142 pod_ready.go:92] pod "kube-controller-manager-ha-925161-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 04:25:58.454615  145142 pod_ready.go:81] duration metric: took 400.070348ms for pod "kube-controller-manager-ha-925161-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:58.454625  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-925161-m03" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:58.649796  145142 request.go:629] Waited for 195.085408ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-925161-m03
	I0719 04:25:58.649856  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-925161-m03
	I0719 04:25:58.649862  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:58.649870  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:58.649874  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:58.653446  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:58.850187  145142 request.go:629] Waited for 195.248482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:58.850262  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:58.850273  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:58.850283  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:58.850291  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:58.853704  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:58.854276  145142 pod_ready.go:92] pod "kube-controller-manager-ha-925161-m03" in "kube-system" namespace has status "Ready":"True"
	I0719 04:25:58.854293  145142 pod_ready.go:81] duration metric: took 399.662625ms for pod "kube-controller-manager-ha-925161-m03" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:58.854303  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8dbqt" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:59.049918  145142 request.go:629] Waited for 195.537406ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8dbqt
	I0719 04:25:59.050021  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8dbqt
	I0719 04:25:59.050033  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:59.050041  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:59.050047  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:59.053229  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:59.249473  145142 request.go:629] Waited for 195.302433ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:25:59.249544  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:25:59.249551  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:59.249561  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:59.249569  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:59.252622  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:59.253355  145142 pod_ready.go:92] pod "kube-proxy-8dbqt" in "kube-system" namespace has status "Ready":"True"
	I0719 04:25:59.253377  145142 pod_ready.go:81] duration metric: took 399.064103ms for pod "kube-proxy-8dbqt" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:59.253390  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-j6526" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:59.450380  145142 request.go:629] Waited for 196.900848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j6526
	I0719 04:25:59.450449  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-j6526
	I0719 04:25:59.450455  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:59.450462  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:59.450466  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:59.453905  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:59.650183  145142 request.go:629] Waited for 195.38685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:59.650242  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:25:59.650248  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:59.650258  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:59.650264  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:59.653782  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:25:59.654347  145142 pod_ready.go:92] pod "kube-proxy-j6526" in "kube-system" namespace has status "Ready":"True"
	I0719 04:25:59.654365  145142 pod_ready.go:81] duration metric: took 400.967227ms for pod "kube-proxy-j6526" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:59.654382  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s6df4" in "kube-system" namespace to be "Ready" ...
	I0719 04:25:59.849901  145142 request.go:629] Waited for 195.426207ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s6df4
	I0719 04:25:59.849976  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s6df4
	I0719 04:25:59.849987  145142 round_trippers.go:469] Request Headers:
	I0719 04:25:59.850001  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:25:59.850008  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:25:59.853528  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:26:00.049577  145142 request.go:629] Waited for 195.405633ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:26:00.049648  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:26:00.049654  145142 round_trippers.go:469] Request Headers:
	I0719 04:26:00.049662  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:26:00.049669  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:26:00.052959  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:26:00.053718  145142 pod_ready.go:92] pod "kube-proxy-s6df4" in "kube-system" namespace has status "Ready":"True"
	I0719 04:26:00.053739  145142 pod_ready.go:81] duration metric: took 399.346448ms for pod "kube-proxy-s6df4" in "kube-system" namespace to be "Ready" ...
	I0719 04:26:00.053751  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-925161" in "kube-system" namespace to be "Ready" ...
	I0719 04:26:00.249858  145142 request.go:629] Waited for 196.008753ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-925161
	I0719 04:26:00.249916  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-925161
	I0719 04:26:00.249921  145142 round_trippers.go:469] Request Headers:
	I0719 04:26:00.249928  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:26:00.249932  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:26:00.253095  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:26:00.450275  145142 request.go:629] Waited for 196.238184ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:26:00.450340  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161
	I0719 04:26:00.450348  145142 round_trippers.go:469] Request Headers:
	I0719 04:26:00.450356  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:26:00.450360  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:26:00.453607  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:26:00.454212  145142 pod_ready.go:92] pod "kube-scheduler-ha-925161" in "kube-system" namespace has status "Ready":"True"
	I0719 04:26:00.454229  145142 pod_ready.go:81] duration metric: took 400.471839ms for pod "kube-scheduler-ha-925161" in "kube-system" namespace to be "Ready" ...
	I0719 04:26:00.454239  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-925161-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:26:00.649891  145142 request.go:629] Waited for 195.574792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-925161-m02
	I0719 04:26:00.649989  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-925161-m02
	I0719 04:26:00.649998  145142 round_trippers.go:469] Request Headers:
	I0719 04:26:00.650010  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:26:00.650017  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:26:00.653707  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:26:00.849941  145142 request.go:629] Waited for 195.367136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:26:00.849999  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m02
	I0719 04:26:00.850004  145142 round_trippers.go:469] Request Headers:
	I0719 04:26:00.850012  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:26:00.850017  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:26:00.854122  145142 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:26:00.854897  145142 pod_ready.go:92] pod "kube-scheduler-ha-925161-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 04:26:00.854921  145142 pod_ready.go:81] duration metric: took 400.674776ms for pod "kube-scheduler-ha-925161-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:26:00.854936  145142 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-925161-m03" in "kube-system" namespace to be "Ready" ...
	I0719 04:26:01.049976  145142 request.go:629] Waited for 194.971665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-925161-m03
	I0719 04:26:01.050039  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-925161-m03
	I0719 04:26:01.050045  145142 round_trippers.go:469] Request Headers:
	I0719 04:26:01.050051  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:26:01.050055  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:26:01.053846  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:26:01.249793  145142 request.go:629] Waited for 195.310307ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:26:01.249889  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes/ha-925161-m03
	I0719 04:26:01.249900  145142 round_trippers.go:469] Request Headers:
	I0719 04:26:01.249912  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:26:01.249923  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:26:01.253321  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:26:01.253857  145142 pod_ready.go:92] pod "kube-scheduler-ha-925161-m03" in "kube-system" namespace has status "Ready":"True"
	I0719 04:26:01.253875  145142 pod_ready.go:81] duration metric: took 398.932004ms for pod "kube-scheduler-ha-925161-m03" in "kube-system" namespace to be "Ready" ...
	I0719 04:26:01.253887  145142 pod_ready.go:38] duration metric: took 5.20007621s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 04:26:01.253902  145142 api_server.go:52] waiting for apiserver process to appear ...
	I0719 04:26:01.253961  145142 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 04:26:01.270775  145142 api_server.go:72] duration metric: took 23.034046733s to wait for apiserver process to appear ...
	I0719 04:26:01.270799  145142 api_server.go:88] waiting for apiserver healthz status ...
	I0719 04:26:01.270816  145142 api_server.go:253] Checking apiserver healthz at https://192.168.39.246:8443/healthz ...
	I0719 04:26:01.275256  145142 api_server.go:279] https://192.168.39.246:8443/healthz returned 200:
	ok
	I0719 04:26:01.275344  145142 round_trippers.go:463] GET https://192.168.39.246:8443/version
	I0719 04:26:01.275355  145142 round_trippers.go:469] Request Headers:
	I0719 04:26:01.275368  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:26:01.275378  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:26:01.276552  145142 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 04:26:01.276638  145142 api_server.go:141] control plane version: v1.30.3
	I0719 04:26:01.276659  145142 api_server.go:131] duration metric: took 5.852592ms to wait for apiserver health ...
	I0719 04:26:01.276668  145142 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 04:26:01.450105  145142 request.go:629] Waited for 173.348425ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0719 04:26:01.450177  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0719 04:26:01.450182  145142 round_trippers.go:469] Request Headers:
	I0719 04:26:01.450190  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:26:01.450195  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:26:01.457087  145142 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:26:01.463884  145142 system_pods.go:59] 24 kube-system pods found
	I0719 04:26:01.463912  145142 system_pods.go:61] "coredns-7db6d8ff4d-7wzcg" [a434f69a-903d-4961-a54c-9a85cbc694b1] Running
	I0719 04:26:01.463919  145142 system_pods.go:61] "coredns-7db6d8ff4d-hwdsq" [894f9528-78da-4cae-9ec6-8e82a3e73264] Running
	I0719 04:26:01.463923  145142 system_pods.go:61] "etcd-ha-925161" [35b14af9-6e7d-4e5c-8c43-fa427109cde3] Running
	I0719 04:26:01.463926  145142 system_pods.go:61] "etcd-ha-925161-m02" [51f60536-03dc-4426-ac13-9d2ec33275f7] Running
	I0719 04:26:01.463930  145142 system_pods.go:61] "etcd-ha-925161-m03" [5d9cecc3-377d-401f-8d53-a70e7d31ccce] Running
	I0719 04:26:01.463933  145142 system_pods.go:61] "kindnet-7gvt6" [3980fcc1-695c-4b62-aab6-93872f4ddc11] Running
	I0719 04:26:01.463937  145142 system_pods.go:61] "kindnet-dkctc" [4ec93698-4a91-44fa-a37f-405bf1a5fa95] Running
	I0719 04:26:01.463940  145142 system_pods.go:61] "kindnet-fsr5f" [988e1118-927a-4468-ba25-3a78d8d06919] Running
	I0719 04:26:01.463945  145142 system_pods.go:61] "kube-apiserver-ha-925161" [1c56f8e6-beb8-4dcc-ba56-5097516043a6] Running
	I0719 04:26:01.463951  145142 system_pods.go:61] "kube-apiserver-ha-925161-m02" [ceaa5f20-d023-482a-9905-54f8bc47da20] Running
	I0719 04:26:01.463954  145142 system_pods.go:61] "kube-apiserver-ha-925161-m03" [3c4984d6-1059-4195-ac82-81a271623c04] Running
	I0719 04:26:01.463960  145142 system_pods.go:61] "kube-controller-manager-ha-925161" [337e75e4-92e9-48fd-a46a-73ce174b4995] Running
	I0719 04:26:01.463963  145142 system_pods.go:61] "kube-controller-manager-ha-925161-m02" [d2d234a3-a18f-4618-9b77-4bcf771463b8] Running
	I0719 04:26:01.463969  145142 system_pods.go:61] "kube-controller-manager-ha-925161-m03" [63e944cd-c1b1-41dc-9fd5-3ad11af12f8b] Running
	I0719 04:26:01.463971  145142 system_pods.go:61] "kube-proxy-8dbqt" [cd11aac3-62df-4603-8102-3384bcc100f1] Running
	I0719 04:26:01.463974  145142 system_pods.go:61] "kube-proxy-j6526" [20b69c28-de0f-4ed7-846c-848d9e938c46] Running
	I0719 04:26:01.463977  145142 system_pods.go:61] "kube-proxy-s6df4" [3373d2d8-4189-48a0-aefc-2ad0511b2a6b] Running
	I0719 04:26:01.463981  145142 system_pods.go:61] "kube-scheduler-ha-925161" [6c1c9f30-93c9-4def-b54e-97b8e27cd12b] Running
	I0719 04:26:01.463984  145142 system_pods.go:61] "kube-scheduler-ha-925161-m02" [60ea2e22-0456-40bc-bddd-32b6737350b3] Running
	I0719 04:26:01.463986  145142 system_pods.go:61] "kube-scheduler-ha-925161-m03" [16e97f9c-20d3-4c3a-988c-b3fce5955407] Running
	I0719 04:26:01.463990  145142 system_pods.go:61] "kube-vip-ha-925161" [8d01a874-336e-476c-b079-852250b3bbcd] Running
	I0719 04:26:01.463994  145142 system_pods.go:61] "kube-vip-ha-925161-m02" [0cb6b1ed-566b-4f64-903b-5af108816970] Running
	I0719 04:26:01.463997  145142 system_pods.go:61] "kube-vip-ha-925161-m03" [0dc7d41b-900e-4d18-9692-c363d4e46dac] Running
	I0719 04:26:01.464001  145142 system_pods.go:61] "storage-provisioner" [bf27da3d-f736-4742-9af5-2c0a024075ec] Running
	I0719 04:26:01.464006  145142 system_pods.go:74] duration metric: took 187.333411ms to wait for pod list to return data ...
	I0719 04:26:01.464021  145142 default_sa.go:34] waiting for default service account to be created ...
	I0719 04:26:01.649422  145142 request.go:629] Waited for 185.324586ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I0719 04:26:01.649484  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/default/serviceaccounts
	I0719 04:26:01.649490  145142 round_trippers.go:469] Request Headers:
	I0719 04:26:01.649500  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:26:01.649511  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:26:01.652810  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:26:01.652963  145142 default_sa.go:45] found service account: "default"
	I0719 04:26:01.652982  145142 default_sa.go:55] duration metric: took 188.951369ms for default service account to be created ...
	I0719 04:26:01.652996  145142 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 04:26:01.850280  145142 request.go:629] Waited for 197.193378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0719 04:26:01.850361  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/namespaces/kube-system/pods
	I0719 04:26:01.850374  145142 round_trippers.go:469] Request Headers:
	I0719 04:26:01.850385  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:26:01.850391  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:26:01.884097  145142 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I0719 04:26:01.890498  145142 system_pods.go:86] 24 kube-system pods found
	I0719 04:26:01.890529  145142 system_pods.go:89] "coredns-7db6d8ff4d-7wzcg" [a434f69a-903d-4961-a54c-9a85cbc694b1] Running
	I0719 04:26:01.890536  145142 system_pods.go:89] "coredns-7db6d8ff4d-hwdsq" [894f9528-78da-4cae-9ec6-8e82a3e73264] Running
	I0719 04:26:01.890543  145142 system_pods.go:89] "etcd-ha-925161" [35b14af9-6e7d-4e5c-8c43-fa427109cde3] Running
	I0719 04:26:01.890548  145142 system_pods.go:89] "etcd-ha-925161-m02" [51f60536-03dc-4426-ac13-9d2ec33275f7] Running
	I0719 04:26:01.890555  145142 system_pods.go:89] "etcd-ha-925161-m03" [5d9cecc3-377d-401f-8d53-a70e7d31ccce] Running
	I0719 04:26:01.890561  145142 system_pods.go:89] "kindnet-7gvt6" [3980fcc1-695c-4b62-aab6-93872f4ddc11] Running
	I0719 04:26:01.890566  145142 system_pods.go:89] "kindnet-dkctc" [4ec93698-4a91-44fa-a37f-405bf1a5fa95] Running
	I0719 04:26:01.890572  145142 system_pods.go:89] "kindnet-fsr5f" [988e1118-927a-4468-ba25-3a78d8d06919] Running
	I0719 04:26:01.890577  145142 system_pods.go:89] "kube-apiserver-ha-925161" [1c56f8e6-beb8-4dcc-ba56-5097516043a6] Running
	I0719 04:26:01.890584  145142 system_pods.go:89] "kube-apiserver-ha-925161-m02" [ceaa5f20-d023-482a-9905-54f8bc47da20] Running
	I0719 04:26:01.890590  145142 system_pods.go:89] "kube-apiserver-ha-925161-m03" [3c4984d6-1059-4195-ac82-81a271623c04] Running
	I0719 04:26:01.890597  145142 system_pods.go:89] "kube-controller-manager-ha-925161" [337e75e4-92e9-48fd-a46a-73ce174b4995] Running
	I0719 04:26:01.890607  145142 system_pods.go:89] "kube-controller-manager-ha-925161-m02" [d2d234a3-a18f-4618-9b77-4bcf771463b8] Running
	I0719 04:26:01.890613  145142 system_pods.go:89] "kube-controller-manager-ha-925161-m03" [63e944cd-c1b1-41dc-9fd5-3ad11af12f8b] Running
	I0719 04:26:01.890620  145142 system_pods.go:89] "kube-proxy-8dbqt" [cd11aac3-62df-4603-8102-3384bcc100f1] Running
	I0719 04:26:01.890629  145142 system_pods.go:89] "kube-proxy-j6526" [20b69c28-de0f-4ed7-846c-848d9e938c46] Running
	I0719 04:26:01.890638  145142 system_pods.go:89] "kube-proxy-s6df4" [3373d2d8-4189-48a0-aefc-2ad0511b2a6b] Running
	I0719 04:26:01.890648  145142 system_pods.go:89] "kube-scheduler-ha-925161" [6c1c9f30-93c9-4def-b54e-97b8e27cd12b] Running
	I0719 04:26:01.890654  145142 system_pods.go:89] "kube-scheduler-ha-925161-m02" [60ea2e22-0456-40bc-bddd-32b6737350b3] Running
	I0719 04:26:01.890659  145142 system_pods.go:89] "kube-scheduler-ha-925161-m03" [16e97f9c-20d3-4c3a-988c-b3fce5955407] Running
	I0719 04:26:01.890666  145142 system_pods.go:89] "kube-vip-ha-925161" [8d01a874-336e-476c-b079-852250b3bbcd] Running
	I0719 04:26:01.890670  145142 system_pods.go:89] "kube-vip-ha-925161-m02" [0cb6b1ed-566b-4f64-903b-5af108816970] Running
	I0719 04:26:01.890674  145142 system_pods.go:89] "kube-vip-ha-925161-m03" [0dc7d41b-900e-4d18-9692-c363d4e46dac] Running
	I0719 04:26:01.890680  145142 system_pods.go:89] "storage-provisioner" [bf27da3d-f736-4742-9af5-2c0a024075ec] Running
	I0719 04:26:01.890690  145142 system_pods.go:126] duration metric: took 237.684394ms to wait for k8s-apps to be running ...
	I0719 04:26:01.890700  145142 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 04:26:01.890747  145142 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:26:01.910434  145142 system_svc.go:56] duration metric: took 19.724775ms WaitForService to wait for kubelet
	I0719 04:26:01.910462  145142 kubeadm.go:582] duration metric: took 23.673736861s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 04:26:01.910482  145142 node_conditions.go:102] verifying NodePressure condition ...
	I0719 04:26:02.049873  145142 request.go:629] Waited for 139.294558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.246:8443/api/v1/nodes
	I0719 04:26:02.049930  145142 round_trippers.go:463] GET https://192.168.39.246:8443/api/v1/nodes
	I0719 04:26:02.049936  145142 round_trippers.go:469] Request Headers:
	I0719 04:26:02.049943  145142 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:26:02.049949  145142 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0719 04:26:02.053903  145142 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:26:02.055081  145142 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 04:26:02.055102  145142 node_conditions.go:123] node cpu capacity is 2
	I0719 04:26:02.055114  145142 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 04:26:02.055117  145142 node_conditions.go:123] node cpu capacity is 2
	I0719 04:26:02.055121  145142 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 04:26:02.055124  145142 node_conditions.go:123] node cpu capacity is 2
	I0719 04:26:02.055127  145142 node_conditions.go:105] duration metric: took 144.641214ms to run NodePressure ...
	I0719 04:26:02.055138  145142 start.go:241] waiting for startup goroutines ...
	I0719 04:26:02.055157  145142 start.go:255] writing updated cluster config ...
	I0719 04:26:02.055529  145142 ssh_runner.go:195] Run: rm -f paused
	I0719 04:26:02.109185  145142 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 04:26:02.111352  145142 out.go:177] * Done! kubectl is now configured to use "ha-925161" cluster and "default" namespace by default
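	(Editor's note, for readers following the trace above: the "waiting for k8s-apps to be running" block is minikube listing the kube-system pods through the API server and confirming each reports Running before it declares the cluster ready. The snippet below is a minimal, illustrative client-go sketch of an equivalent check, not minikube's actual implementation; it assumes a kubeconfig at the default ~/.kube/config location, which is where "minikube start" writes cluster credentials in this run.)

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client from the default kubeconfig (~/.kube/config).
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		// List kube-system pods and report any that are not Running,
		// mirroring the "waiting for k8s-apps to be running" check in the log.
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		notRunning := 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				notRunning++
				fmt.Printf("%s is %s\n", p.Name, p.Status.Phase)
			}
		}
		fmt.Printf("%d kube-system pods found, %d not running\n", len(pods.Items), notRunning)
	}

	(A real readiness check would retry with backoff, as the throttled GET requests above suggest minikube does; this sketch performs a single pass only.)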
	
	
	==> CRI-O <==
	Jul 19 04:31:31 ha-925161 crio[682]: time="2024-07-19 04:31:31.965377455Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=126d9703-a66f-485f-abf2-dd0e0e571ac7 name=/runtime.v1.RuntimeService/Version
	Jul 19 04:31:31 ha-925161 crio[682]: time="2024-07-19 04:31:31.966383039Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=75c65a42-9744-4a16-b3b0-03ebed90d21c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 04:31:31 ha-925161 crio[682]: time="2024-07-19 04:31:31.966797629Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721363491966767282,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144984,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=75c65a42-9744-4a16-b3b0-03ebed90d21c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 04:31:31 ha-925161 crio[682]: time="2024-07-19 04:31:31.967415126Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df9c9620-71fc-4ab2-90cb-08a76aec00bc name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:31:31 ha-925161 crio[682]: time="2024-07-19 04:31:31.967469213Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=df9c9620-71fc-4ab2-90cb-08a76aec00bc name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:31:31 ha-925161 crio[682]: time="2024-07-19 04:31:31.967739527Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:376dac90130c20ad5ee1fd7cda6913750ce2847ab6b24b8a5ade8f85a7933736,PodSandboxId:0d44fb43a7c0f7260d182a488fbebec1e6a62c08f3bbfbe0601b399af7548cc1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721363166324611006,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xjdg9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e5d1049-6c89-429b-96a8-cbb8abd2b26f,},Annotations:map[string]string{io.kubernetes.container.hash: 3ac1d6ce,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8fbd19dd4d996dc34ade93b87b023c88d559af11bbea8aaf9f9b3a2e6f05672,PodSandboxId:0bb04d64362d6e31033c86d2709d8dea8839e5561f013e6cb8daeb9084a3c238,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721363015205884262,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hwdsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894f9528-78da-4cae-9ec6-8e82a3e73264,},Annotations:map[string]string{io.kubernetes.container.hash: 7fba949d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f21e70e6b65805b44f3ff4e90dd773f402ad0eb25822b81eda2c2816bc7691,PodSandboxId:62bcd5e2d22cb8954eda67d5b70c31e06ef2499d3b74d790b9661ca694f80657,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721363015144650485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wzcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
a434f69a-903d-4961-a54c-9a85cbc694b1,},Annotations:map[string]string{io.kubernetes.container.hash: 40e24975,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c755e5c5cff44f8a7c38a73192c243bbcdb84c3f5da3847d21531941a8b95d93,PodSandboxId:40cd7297d1d53fed31be961d6e39847b14d8d75a0e4eca3b0c9b05a3cec7ac54,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1721363015082766923,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf27da3d-f736-4742-9af5-2c0a024075ec,},Annotations:map[string]string{io.kubernetes.container.hash: f97e67c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1109d10f2b3d47b16b6903644a11f08183882b4c704e06aa23088c57b2fa5036,PodSandboxId:b3c277ef1f53ba98a24302115099ce4ef05d3e256d942b1f4d3157995b54ecae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:17213630
03130579717,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fsr5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 988e1118-927a-4468-ba25-3a78d8d06919,},Annotations:map[string]string{io.kubernetes.container.hash: 7fbb852b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c9e12889a1662da7b7e18d67865a94825c080659d17bd601943c475e944e3b6,PodSandboxId:696364d98fd5c2d4bd655bdb3ca5e141f8f51b0ca3c66da051ac248dc390a4d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721363002828843500,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8dbqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd11aac3-62df-4603-8102-3384bcc100f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3deffe05,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae55b7f5bd7bf842ca50cf5c5b471045260fe96b7a4a5ff03cf587c15f692412,PodSandboxId:42a74695a301994a8fe69f505b946596a45928011a694e7f458b0030c12c6c11,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721362985963244969,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eda1524f631b786182d69b02283573f,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeef22350ca0f1246fb4f32ba2c27abda6e12fc362e76159e9806d53d4296a23,PodSandboxId:fa3836c68c71d461c5bb30a2c7d5752ee698b31422b3aa8d734b871de431a0e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721362982966094602,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa73bd154bae08cde433b82e51ec78df,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b041f48cc90cfab5ce219992de0d8ffb58d778e852683bd80044d359a0b7d010,PodSandboxId:a03be60cf1fe9442e05ead4b2fd503182a4cb9cd50acaaedfd7ef1a4024aab8c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721362982930573061,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cca920f3f48d0fa2da37f2a22f12ba,},Annotations:map[string]string{io.kubernetes.container.hash: 64aa1422,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6794bae567b7e2c964bdfab18ab28a02cd5bad8823d55bae131a60e8dbefd012,PodSandboxId:5deb82997eca5aa2cd0fcbe3083dd4d824032623e4e1727dd40d362c5defc745,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721362982917267786,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-925161,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c423aaede6d00f00e13551d35c79c4b,},Annotations:map[string]string{io.kubernetes.container.hash: 9eff95bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:882ed073edd75b4a9831d3ded02cad425e74f0eab0bb34819f37757829560513,PodSandboxId:a1d0203f57600d7f98a4d21b8e859ad53d31a54211458e99baede150d4f27f62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721362982883247583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-925161,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 349099d3ab7836a83b145a30eb9936d6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=df9c9620-71fc-4ab2-90cb-08a76aec00bc name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:31:32 ha-925161 crio[682]: time="2024-07-19 04:31:32.005123483Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=c9b59a97-6285-4a21-980e-8c91efc4fa46 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 19 04:31:32 ha-925161 crio[682]: time="2024-07-19 04:31:32.005391014Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:0d44fb43a7c0f7260d182a488fbebec1e6a62c08f3bbfbe0601b399af7548cc1,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-xjdg9,Uid:5e5d1049-6c89-429b-96a8-cbb8abd2b26f,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721363163445110549,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-xjdg9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e5d1049-6c89-429b-96a8-cbb8abd2b26f,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T04:26:03.130636712Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:40cd7297d1d53fed31be961d6e39847b14d8d75a0e4eca3b0c9b05a3cec7ac54,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:bf27da3d-f736-4742-9af5-2c0a024075ec,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1721363014931604350,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf27da3d-f736-4742-9af5-2c0a024075ec,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-19T04:23:34.615001564Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0bb04d64362d6e31033c86d2709d8dea8839e5561f013e6cb8daeb9084a3c238,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-hwdsq,Uid:894f9528-78da-4cae-9ec6-8e82a3e73264,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721363014930121691,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-hwdsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894f9528-78da-4cae-9ec6-8e82a3e73264,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T04:23:34.612478545Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:62bcd5e2d22cb8954eda67d5b70c31e06ef2499d3b74d790b9661ca694f80657,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-7wzcg,Uid:a434f69a-903d-4961-a54c-9a85cbc694b1,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1721363014924401158,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wzcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a434f69a-903d-4961-a54c-9a85cbc694b1,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T04:23:34.608406566Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b3c277ef1f53ba98a24302115099ce4ef05d3e256d942b1f4d3157995b54ecae,Metadata:&PodSandboxMetadata{Name:kindnet-fsr5f,Uid:988e1118-927a-4468-ba25-3a78d8d06919,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721363002744740119,Labels:map[string]string{app: kindnet,controller-revision-hash: 545f566499,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-fsr5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 988e1118-927a-4468-ba25-3a78d8d06919,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotati
ons:map[string]string{kubernetes.io/config.seen: 2024-07-19T04:23:21.836179319Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:696364d98fd5c2d4bd655bdb3ca5e141f8f51b0ca3c66da051ac248dc390a4d8,Metadata:&PodSandboxMetadata{Name:kube-proxy-8dbqt,Uid:cd11aac3-62df-4603-8102-3384bcc100f1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721363002712554861,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-8dbqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd11aac3-62df-4603-8102-3384bcc100f1,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T04:23:21.804029250Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:42a74695a301994a8fe69f505b946596a45928011a694e7f458b0030c12c6c11,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-925161,Uid:1eda1524f631b786182d69b02283573f,Namespace:kube-system,Attempt:0,},Sta
te:SANDBOX_READY,CreatedAt:1721362982716796050,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eda1524f631b786182d69b02283573f,},Annotations:map[string]string{kubernetes.io/config.hash: 1eda1524f631b786182d69b02283573f,kubernetes.io/config.seen: 2024-07-19T04:23:02.251601243Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a1d0203f57600d7f98a4d21b8e859ad53d31a54211458e99baede150d4f27f62,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-925161,Uid:349099d3ab7836a83b145a30eb9936d6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721362982703553436,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 349099d3ab7836a83b145a30eb9936d6,tier: control-plane,},Annotations:map[string]string{kube
rnetes.io/config.hash: 349099d3ab7836a83b145a30eb9936d6,kubernetes.io/config.seen: 2024-07-19T04:23:02.251594471Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a03be60cf1fe9442e05ead4b2fd503182a4cb9cd50acaaedfd7ef1a4024aab8c,Metadata:&PodSandboxMetadata{Name:etcd-ha-925161,Uid:36cca920f3f48d0fa2da37f2a22f12ba,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721362982702071453,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cca920f3f48d0fa2da37f2a22f12ba,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.246:2379,kubernetes.io/config.hash: 36cca920f3f48d0fa2da37f2a22f12ba,kubernetes.io/config.seen: 2024-07-19T04:23:02.251589477Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fa3836c68c71d461c5bb30a2c7d5752ee698b31422b3aa8d734b871de431a0e4,Metadata:&Pod
SandboxMetadata{Name:kube-scheduler-ha-925161,Uid:aa73bd154bae08cde433b82e51ec78df,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721362982700414647,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa73bd154bae08cde433b82e51ec78df,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: aa73bd154bae08cde433b82e51ec78df,kubernetes.io/config.seen: 2024-07-19T04:23:02.251600133Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5deb82997eca5aa2cd0fcbe3083dd4d824032623e4e1727dd40d362c5defc745,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-925161,Uid:7c423aaede6d00f00e13551d35c79c4b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721362982699546074,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-925161,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c423aaede6d00f00e13551d35c79c4b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.246:8443,kubernetes.io/config.hash: 7c423aaede6d00f00e13551d35c79c4b,kubernetes.io/config.seen: 2024-07-19T04:23:02.251592777Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=c9b59a97-6285-4a21-980e-8c91efc4fa46 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 19 04:31:32 ha-925161 crio[682]: time="2024-07-19 04:31:32.006302759Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e5fdcb77-a91c-4fd5-ad48-b7c2a12a6a09 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:31:32 ha-925161 crio[682]: time="2024-07-19 04:31:32.006360008Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e5fdcb77-a91c-4fd5-ad48-b7c2a12a6a09 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:31:32 ha-925161 crio[682]: time="2024-07-19 04:31:32.006604679Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:376dac90130c20ad5ee1fd7cda6913750ce2847ab6b24b8a5ade8f85a7933736,PodSandboxId:0d44fb43a7c0f7260d182a488fbebec1e6a62c08f3bbfbe0601b399af7548cc1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721363166324611006,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xjdg9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e5d1049-6c89-429b-96a8-cbb8abd2b26f,},Annotations:map[string]string{io.kubernetes.container.hash: 3ac1d6ce,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8fbd19dd4d996dc34ade93b87b023c88d559af11bbea8aaf9f9b3a2e6f05672,PodSandboxId:0bb04d64362d6e31033c86d2709d8dea8839e5561f013e6cb8daeb9084a3c238,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721363015205884262,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hwdsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894f9528-78da-4cae-9ec6-8e82a3e73264,},Annotations:map[string]string{io.kubernetes.container.hash: 7fba949d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f21e70e6b65805b44f3ff4e90dd773f402ad0eb25822b81eda2c2816bc7691,PodSandboxId:62bcd5e2d22cb8954eda67d5b70c31e06ef2499d3b74d790b9661ca694f80657,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721363015144650485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wzcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
a434f69a-903d-4961-a54c-9a85cbc694b1,},Annotations:map[string]string{io.kubernetes.container.hash: 40e24975,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c755e5c5cff44f8a7c38a73192c243bbcdb84c3f5da3847d21531941a8b95d93,PodSandboxId:40cd7297d1d53fed31be961d6e39847b14d8d75a0e4eca3b0c9b05a3cec7ac54,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1721363015082766923,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf27da3d-f736-4742-9af5-2c0a024075ec,},Annotations:map[string]string{io.kubernetes.container.hash: f97e67c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1109d10f2b3d47b16b6903644a11f08183882b4c704e06aa23088c57b2fa5036,PodSandboxId:b3c277ef1f53ba98a24302115099ce4ef05d3e256d942b1f4d3157995b54ecae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:17213630
03130579717,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fsr5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 988e1118-927a-4468-ba25-3a78d8d06919,},Annotations:map[string]string{io.kubernetes.container.hash: 7fbb852b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c9e12889a1662da7b7e18d67865a94825c080659d17bd601943c475e944e3b6,PodSandboxId:696364d98fd5c2d4bd655bdb3ca5e141f8f51b0ca3c66da051ac248dc390a4d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721363002828843500,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8dbqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd11aac3-62df-4603-8102-3384bcc100f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3deffe05,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae55b7f5bd7bf842ca50cf5c5b471045260fe96b7a4a5ff03cf587c15f692412,PodSandboxId:42a74695a301994a8fe69f505b946596a45928011a694e7f458b0030c12c6c11,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721362985963244969,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eda1524f631b786182d69b02283573f,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeef22350ca0f1246fb4f32ba2c27abda6e12fc362e76159e9806d53d4296a23,PodSandboxId:fa3836c68c71d461c5bb30a2c7d5752ee698b31422b3aa8d734b871de431a0e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721362982966094602,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa73bd154bae08cde433b82e51ec78df,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b041f48cc90cfab5ce219992de0d8ffb58d778e852683bd80044d359a0b7d010,PodSandboxId:a03be60cf1fe9442e05ead4b2fd503182a4cb9cd50acaaedfd7ef1a4024aab8c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721362982930573061,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cca920f3f48d0fa2da37f2a22f12ba,},Annotations:map[string]string{io.kubernetes.container.hash: 64aa1422,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6794bae567b7e2c964bdfab18ab28a02cd5bad8823d55bae131a60e8dbefd012,PodSandboxId:5deb82997eca5aa2cd0fcbe3083dd4d824032623e4e1727dd40d362c5defc745,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721362982917267786,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-925161,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c423aaede6d00f00e13551d35c79c4b,},Annotations:map[string]string{io.kubernetes.container.hash: 9eff95bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:882ed073edd75b4a9831d3ded02cad425e74f0eab0bb34819f37757829560513,PodSandboxId:a1d0203f57600d7f98a4d21b8e859ad53d31a54211458e99baede150d4f27f62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721362982883247583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-925161,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 349099d3ab7836a83b145a30eb9936d6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e5fdcb77-a91c-4fd5-ad48-b7c2a12a6a09 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:31:32 ha-925161 crio[682]: time="2024-07-19 04:31:32.015570397Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f1ecd401-ae9b-432c-b7ff-2bf9c6d427e2 name=/runtime.v1.RuntimeService/Version
	Jul 19 04:31:32 ha-925161 crio[682]: time="2024-07-19 04:31:32.015643164Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f1ecd401-ae9b-432c-b7ff-2bf9c6d427e2 name=/runtime.v1.RuntimeService/Version
	Jul 19 04:31:32 ha-925161 crio[682]: time="2024-07-19 04:31:32.016698638Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e8022fa5-622c-46c3-b57b-4eafc64bc992 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 04:31:32 ha-925161 crio[682]: time="2024-07-19 04:31:32.017405113Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721363492017380663,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144984,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e8022fa5-622c-46c3-b57b-4eafc64bc992 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 04:31:32 ha-925161 crio[682]: time="2024-07-19 04:31:32.018129963Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ace5b3b-a036-47c0-b3a6-7d62ca7398e1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:31:32 ha-925161 crio[682]: time="2024-07-19 04:31:32.018194693Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ace5b3b-a036-47c0-b3a6-7d62ca7398e1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:31:32 ha-925161 crio[682]: time="2024-07-19 04:31:32.018451681Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:376dac90130c20ad5ee1fd7cda6913750ce2847ab6b24b8a5ade8f85a7933736,PodSandboxId:0d44fb43a7c0f7260d182a488fbebec1e6a62c08f3bbfbe0601b399af7548cc1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721363166324611006,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xjdg9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e5d1049-6c89-429b-96a8-cbb8abd2b26f,},Annotations:map[string]string{io.kubernetes.container.hash: 3ac1d6ce,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8fbd19dd4d996dc34ade93b87b023c88d559af11bbea8aaf9f9b3a2e6f05672,PodSandboxId:0bb04d64362d6e31033c86d2709d8dea8839e5561f013e6cb8daeb9084a3c238,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721363015205884262,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hwdsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894f9528-78da-4cae-9ec6-8e82a3e73264,},Annotations:map[string]string{io.kubernetes.container.hash: 7fba949d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f21e70e6b65805b44f3ff4e90dd773f402ad0eb25822b81eda2c2816bc7691,PodSandboxId:62bcd5e2d22cb8954eda67d5b70c31e06ef2499d3b74d790b9661ca694f80657,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721363015144650485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wzcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
a434f69a-903d-4961-a54c-9a85cbc694b1,},Annotations:map[string]string{io.kubernetes.container.hash: 40e24975,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c755e5c5cff44f8a7c38a73192c243bbcdb84c3f5da3847d21531941a8b95d93,PodSandboxId:40cd7297d1d53fed31be961d6e39847b14d8d75a0e4eca3b0c9b05a3cec7ac54,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1721363015082766923,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf27da3d-f736-4742-9af5-2c0a024075ec,},Annotations:map[string]string{io.kubernetes.container.hash: f97e67c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1109d10f2b3d47b16b6903644a11f08183882b4c704e06aa23088c57b2fa5036,PodSandboxId:b3c277ef1f53ba98a24302115099ce4ef05d3e256d942b1f4d3157995b54ecae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:17213630
03130579717,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fsr5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 988e1118-927a-4468-ba25-3a78d8d06919,},Annotations:map[string]string{io.kubernetes.container.hash: 7fbb852b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c9e12889a1662da7b7e18d67865a94825c080659d17bd601943c475e944e3b6,PodSandboxId:696364d98fd5c2d4bd655bdb3ca5e141f8f51b0ca3c66da051ac248dc390a4d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721363002828843500,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8dbqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd11aac3-62df-4603-8102-3384bcc100f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3deffe05,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae55b7f5bd7bf842ca50cf5c5b471045260fe96b7a4a5ff03cf587c15f692412,PodSandboxId:42a74695a301994a8fe69f505b946596a45928011a694e7f458b0030c12c6c11,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721362985963244969,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eda1524f631b786182d69b02283573f,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeef22350ca0f1246fb4f32ba2c27abda6e12fc362e76159e9806d53d4296a23,PodSandboxId:fa3836c68c71d461c5bb30a2c7d5752ee698b31422b3aa8d734b871de431a0e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721362982966094602,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa73bd154bae08cde433b82e51ec78df,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b041f48cc90cfab5ce219992de0d8ffb58d778e852683bd80044d359a0b7d010,PodSandboxId:a03be60cf1fe9442e05ead4b2fd503182a4cb9cd50acaaedfd7ef1a4024aab8c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721362982930573061,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cca920f3f48d0fa2da37f2a22f12ba,},Annotations:map[string]string{io.kubernetes.container.hash: 64aa1422,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6794bae567b7e2c964bdfab18ab28a02cd5bad8823d55bae131a60e8dbefd012,PodSandboxId:5deb82997eca5aa2cd0fcbe3083dd4d824032623e4e1727dd40d362c5defc745,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721362982917267786,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-925161,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c423aaede6d00f00e13551d35c79c4b,},Annotations:map[string]string{io.kubernetes.container.hash: 9eff95bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:882ed073edd75b4a9831d3ded02cad425e74f0eab0bb34819f37757829560513,PodSandboxId:a1d0203f57600d7f98a4d21b8e859ad53d31a54211458e99baede150d4f27f62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721362982883247583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-925161,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 349099d3ab7836a83b145a30eb9936d6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ace5b3b-a036-47c0-b3a6-7d62ca7398e1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:31:32 ha-925161 crio[682]: time="2024-07-19 04:31:32.057329470Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a61d250d-3e53-47d7-ba84-beddabefb2ea name=/runtime.v1.RuntimeService/Version
	Jul 19 04:31:32 ha-925161 crio[682]: time="2024-07-19 04:31:32.057401567Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a61d250d-3e53-47d7-ba84-beddabefb2ea name=/runtime.v1.RuntimeService/Version
	Jul 19 04:31:32 ha-925161 crio[682]: time="2024-07-19 04:31:32.058192519Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cfe34f47-dc68-436c-a6f9-213d5f281f4b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 04:31:32 ha-925161 crio[682]: time="2024-07-19 04:31:32.058588016Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721363492058565888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144984,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cfe34f47-dc68-436c-a6f9-213d5f281f4b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 04:31:32 ha-925161 crio[682]: time="2024-07-19 04:31:32.059252576Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6db9a7e1-2372-45c9-bb51-be4b4f20dde3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:31:32 ha-925161 crio[682]: time="2024-07-19 04:31:32.059329520Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6db9a7e1-2372-45c9-bb51-be4b4f20dde3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:31:32 ha-925161 crio[682]: time="2024-07-19 04:31:32.059576847Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:376dac90130c20ad5ee1fd7cda6913750ce2847ab6b24b8a5ade8f85a7933736,PodSandboxId:0d44fb43a7c0f7260d182a488fbebec1e6a62c08f3bbfbe0601b399af7548cc1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721363166324611006,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xjdg9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e5d1049-6c89-429b-96a8-cbb8abd2b26f,},Annotations:map[string]string{io.kubernetes.container.hash: 3ac1d6ce,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8fbd19dd4d996dc34ade93b87b023c88d559af11bbea8aaf9f9b3a2e6f05672,PodSandboxId:0bb04d64362d6e31033c86d2709d8dea8839e5561f013e6cb8daeb9084a3c238,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721363015205884262,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hwdsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894f9528-78da-4cae-9ec6-8e82a3e73264,},Annotations:map[string]string{io.kubernetes.container.hash: 7fba949d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f21e70e6b65805b44f3ff4e90dd773f402ad0eb25822b81eda2c2816bc7691,PodSandboxId:62bcd5e2d22cb8954eda67d5b70c31e06ef2499d3b74d790b9661ca694f80657,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721363015144650485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wzcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
a434f69a-903d-4961-a54c-9a85cbc694b1,},Annotations:map[string]string{io.kubernetes.container.hash: 40e24975,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c755e5c5cff44f8a7c38a73192c243bbcdb84c3f5da3847d21531941a8b95d93,PodSandboxId:40cd7297d1d53fed31be961d6e39847b14d8d75a0e4eca3b0c9b05a3cec7ac54,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1721363015082766923,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf27da3d-f736-4742-9af5-2c0a024075ec,},Annotations:map[string]string{io.kubernetes.container.hash: f97e67c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1109d10f2b3d47b16b6903644a11f08183882b4c704e06aa23088c57b2fa5036,PodSandboxId:b3c277ef1f53ba98a24302115099ce4ef05d3e256d942b1f4d3157995b54ecae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:17213630
03130579717,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fsr5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 988e1118-927a-4468-ba25-3a78d8d06919,},Annotations:map[string]string{io.kubernetes.container.hash: 7fbb852b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c9e12889a1662da7b7e18d67865a94825c080659d17bd601943c475e944e3b6,PodSandboxId:696364d98fd5c2d4bd655bdb3ca5e141f8f51b0ca3c66da051ac248dc390a4d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721363002828843500,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8dbqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd11aac3-62df-4603-8102-3384bcc100f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3deffe05,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae55b7f5bd7bf842ca50cf5c5b471045260fe96b7a4a5ff03cf587c15f692412,PodSandboxId:42a74695a301994a8fe69f505b946596a45928011a694e7f458b0030c12c6c11,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721362985963244969,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eda1524f631b786182d69b02283573f,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeef22350ca0f1246fb4f32ba2c27abda6e12fc362e76159e9806d53d4296a23,PodSandboxId:fa3836c68c71d461c5bb30a2c7d5752ee698b31422b3aa8d734b871de431a0e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721362982966094602,Labels:map[string]string{io.kubernetes.container.name: kub
e-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa73bd154bae08cde433b82e51ec78df,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b041f48cc90cfab5ce219992de0d8ffb58d778e852683bd80044d359a0b7d010,PodSandboxId:a03be60cf1fe9442e05ead4b2fd503182a4cb9cd50acaaedfd7ef1a4024aab8c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721362982930573061,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-
ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cca920f3f48d0fa2da37f2a22f12ba,},Annotations:map[string]string{io.kubernetes.container.hash: 64aa1422,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6794bae567b7e2c964bdfab18ab28a02cd5bad8823d55bae131a60e8dbefd012,PodSandboxId:5deb82997eca5aa2cd0fcbe3083dd4d824032623e4e1727dd40d362c5defc745,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721362982917267786,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-925161,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c423aaede6d00f00e13551d35c79c4b,},Annotations:map[string]string{io.kubernetes.container.hash: 9eff95bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:882ed073edd75b4a9831d3ded02cad425e74f0eab0bb34819f37757829560513,PodSandboxId:a1d0203f57600d7f98a4d21b8e859ad53d31a54211458e99baede150d4f27f62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721362982883247583,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-925161,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 349099d3ab7836a83b145a30eb9936d6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6db9a7e1-2372-45c9-bb51-be4b4f20dde3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	376dac90130c2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   5 minutes ago       Running             busybox                   0                   0d44fb43a7c0f       busybox-fc5497c4f-xjdg9
	f8fbd19dd4d99       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   0bb04d64362d6       coredns-7db6d8ff4d-hwdsq
	14f21e70e6b65       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   62bcd5e2d22cb       coredns-7db6d8ff4d-7wzcg
	c755e5c5cff44       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   40cd7297d1d53       storage-provisioner
	1109d10f2b3d4       5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f                                      8 minutes ago       Running             kindnet-cni               0                   b3c277ef1f53b       kindnet-fsr5f
	6c9e12889a166       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago       Running             kube-proxy                0                   696364d98fd5c       kube-proxy-8dbqt
	ae55b7f5bd7bf       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     8 minutes ago       Running             kube-vip                  0                   42a74695a3019       kube-vip-ha-925161
	eeef22350ca0f       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago       Running             kube-scheduler            0                   fa3836c68c71d       kube-scheduler-ha-925161
	b041f48cc90cf       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago       Running             etcd                      0                   a03be60cf1fe9       etcd-ha-925161
	6794bae567b7e       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago       Running             kube-apiserver            0                   5deb82997eca5       kube-apiserver-ha-925161
	882ed073edd75       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago       Running             kube-controller-manager   0                   a1d0203f57600       kube-controller-manager-ha-925161
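	
	The table above is minikube's CRI-level snapshot of the primary control-plane node. As a rough, hypothetical way to pull the same view by hand (assuming the ha-925161 profile from this run still exists), crictl can be queried over minikube ssh:
	
	  $ out/minikube-linux-amd64 -p ha-925161 ssh -- sudo crictl ps -a   # all containers, any state
	  $ out/minikube-linux-amd64 -p ha-925161 ssh -- sudo crictl pods    # the matching pod sandboxes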
	
	
	==> coredns [14f21e70e6b65805b44f3ff4e90dd773f402ad0eb25822b81eda2c2816bc7691] <==
	[INFO] 10.244.0.4:60754 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129059s
	[INFO] 10.244.0.4:43447 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000075335s
	[INFO] 10.244.0.4:60737 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000497893s
	[INFO] 10.244.0.4:51122 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001603238s
	[INFO] 10.244.1.2:37547 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000201994s
	[INFO] 10.244.1.2:41971 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00346851s
	[INFO] 10.244.1.2:57720 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114773s
	[INFO] 10.244.2.3:58305 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001754058s
	[INFO] 10.244.2.3:54206 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000118435s
	[INFO] 10.244.2.3:37056 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000234861s
	[INFO] 10.244.2.3:45425 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073142s
	[INFO] 10.244.0.4:54647 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007602s
	[INFO] 10.244.0.4:33742 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001338144s
	[INFO] 10.244.1.2:58214 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123014s
	[INFO] 10.244.1.2:58591 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083326s
	[INFO] 10.244.1.2:33227 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000196172s
	[INFO] 10.244.2.3:49582 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115766s
	[INFO] 10.244.2.3:46761 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109526s
	[INFO] 10.244.0.4:50248 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066399s
	[INFO] 10.244.1.2:45766 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00012847s
	[INFO] 10.244.1.2:57759 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000145394s
	[INFO] 10.244.2.3:50037 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160043s
	[INFO] 10.244.2.3:49469 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000075305s
	[INFO] 10.244.2.3:39504 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000057986s
	[INFO] 10.244.0.4:39098 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096095s
	
	
	==> coredns [f8fbd19dd4d996dc34ade93b87b023c88d559af11bbea8aaf9f9b3a2e6f05672] <==
	[INFO] 10.244.1.2:34010 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000219789s
	[INFO] 10.244.1.2:47044 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126724s
	[INFO] 10.244.1.2:42035 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109926s
	[INFO] 10.244.2.3:42792 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145146s
	[INFO] 10.244.2.3:38794 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000083694s
	[INFO] 10.244.2.3:48698 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001253504s
	[INFO] 10.244.2.3:45424 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060715s
	[INFO] 10.244.0.4:53435 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00016485s
	[INFO] 10.244.0.4:47050 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001790838s
	[INFO] 10.244.0.4:38074 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000058109s
	[INFO] 10.244.0.4:53487 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000066861s
	[INFO] 10.244.0.4:48230 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00012907s
	[INFO] 10.244.0.4:45713 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000053151s
	[INFO] 10.244.1.2:40224 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119446s
	[INFO] 10.244.2.3:48643 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000101063s
	[INFO] 10.244.2.3:59393 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008526s
	[INFO] 10.244.0.4:38457 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000103892s
	[INFO] 10.244.0.4:36242 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00015645s
	[INFO] 10.244.0.4:47871 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076477s
	[INFO] 10.244.1.2:44263 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176905s
	[INFO] 10.244.1.2:56297 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000215661s
	[INFO] 10.244.2.3:45341 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000148843s
	[INFO] 10.244.0.4:41990 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000105346s
	[INFO] 10.244.0.4:43204 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000121535s
	[INFO] 10.244.0.4:60972 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000251518s
	
	
	==> describe nodes <==
	Name:               ha-925161
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-925161
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=ha-925161
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T04_23_10_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 04:23:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-925161
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 04:31:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 04:31:19 +0000   Fri, 19 Jul 2024 04:23:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 04:31:19 +0000   Fri, 19 Jul 2024 04:23:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 04:31:19 +0000   Fri, 19 Jul 2024 04:23:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 04:31:19 +0000   Fri, 19 Jul 2024 04:23:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.246
	  Hostname:    ha-925161
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ff8c87164fa44c4f827d29ad58165cee
	  System UUID:                ff8c8716-4fa4-4c4f-827d-29ad58165cee
	  Boot ID:                    82d231ce-d7a6-41a1-a656-2e7410a6f84c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xjdg9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 coredns-7db6d8ff4d-7wzcg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m10s
	  kube-system                 coredns-7db6d8ff4d-hwdsq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m10s
	  kube-system                 etcd-ha-925161                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m23s
	  kube-system                 kindnet-fsr5f                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m11s
	  kube-system                 kube-apiserver-ha-925161             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-controller-manager-ha-925161    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-proxy-8dbqt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 kube-scheduler-ha-925161             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-vip-ha-925161                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m9s   kube-proxy       
	  Normal  Starting                 8m23s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m23s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m23s  kubelet          Node ha-925161 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m23s  kubelet          Node ha-925161 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m23s  kubelet          Node ha-925161 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m11s  node-controller  Node ha-925161 event: Registered Node ha-925161 in Controller
	  Normal  NodeReady                7m58s  kubelet          Node ha-925161 status is now: NodeReady
	  Normal  RegisteredNode           6m55s  node-controller  Node ha-925161 event: Registered Node ha-925161 in Controller
	  Normal  RegisteredNode           5m40s  node-controller  Node ha-925161 event: Registered Node ha-925161 in Controller
	
	
	Name:               ha-925161-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-925161-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=ha-925161
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T04_24_23_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 04:24:20 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-925161-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 04:28:04 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 19 Jul 2024 04:26:22 +0000   Fri, 19 Jul 2024 04:28:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 19 Jul 2024 04:26:22 +0000   Fri, 19 Jul 2024 04:28:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 19 Jul 2024 04:26:22 +0000   Fri, 19 Jul 2024 04:28:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 19 Jul 2024 04:26:22 +0000   Fri, 19 Jul 2024 04:28:46 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-925161-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9158ff8415464fc08c01f2344e6694f7
	  System UUID:                9158ff84-1546-4fc0-8c01-f2344e6694f7
	  Boot ID:                    94533959-ddf8-4bdd-b493-22c20551603d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5785p                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 etcd-ha-925161-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m10s
	  kube-system                 kindnet-dkctc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m12s
	  kube-system                 kube-apiserver-ha-925161-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 kube-controller-manager-ha-925161-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 kube-proxy-s6df4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m12s
	  kube-system                 kube-scheduler-ha-925161-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m11s
	  kube-system                 kube-vip-ha-925161-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m8s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  7m12s (x8 over 7m12s)  kubelet          Node ha-925161-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m12s (x8 over 7m12s)  kubelet          Node ha-925161-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m12s (x7 over 7m12s)  kubelet          Node ha-925161-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m11s                  node-controller  Node ha-925161-m02 event: Registered Node ha-925161-m02 in Controller
	  Normal  RegisteredNode           6m55s                  node-controller  Node ha-925161-m02 event: Registered Node ha-925161-m02 in Controller
	  Normal  RegisteredNode           5m40s                  node-controller  Node ha-925161-m02 event: Registered Node ha-925161-m02 in Controller
	  Normal  NodeNotReady             2m46s                  node-controller  Node ha-925161-m02 status is now: NodeNotReady
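	
	The lease for ha-925161-m02 was last renewed at 04:28:04 and the node was marked NotReady at 04:28:46, which is what places the unreachable NoSchedule/NoExecute taints shown above. A hypothetical follow-up (not part of the recorded run) to confirm node and kubelet state would be something like:
	
	  $ out/minikube-linux-amd64 -p ha-925161 kubectl -- get nodes -o wide
	  $ out/minikube-linux-amd64 -p ha-925161 ssh -n ha-925161-m02 -- sudo systemctl status kubelet   # may fail if the VM itself is unreachable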
	
	
	Name:               ha-925161-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-925161-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=ha-925161
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T04_25_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 04:25:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-925161-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 04:31:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 04:26:36 +0000   Fri, 19 Jul 2024 04:25:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 04:26:36 +0000   Fri, 19 Jul 2024 04:25:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 04:26:36 +0000   Fri, 19 Jul 2024 04:25:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 04:26:36 +0000   Fri, 19 Jul 2024 04:25:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.190
	  Hostname:    ha-925161-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3e81f7ca95c24874b7c002cc8e188173
	  System UUID:                3e81f7ca-95c2-4874-b7c0-02cc8e188173
	  Boot ID:                    b4cf88f1-2acb-4810-bae4-c71b13ffc20e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-t2m4d                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 etcd-ha-925161-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m56s
	  kube-system                 kindnet-7gvt6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m58s
	  kube-system                 kube-apiserver-ha-925161-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-controller-manager-ha-925161-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-proxy-j6526                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 kube-scheduler-ha-925161-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-vip-ha-925161-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m53s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m58s (x8 over 5m58s)  kubelet          Node ha-925161-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m58s (x8 over 5m58s)  kubelet          Node ha-925161-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m58s (x7 over 5m58s)  kubelet          Node ha-925161-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m56s                  node-controller  Node ha-925161-m03 event: Registered Node ha-925161-m03 in Controller
	  Normal  RegisteredNode           5m55s                  node-controller  Node ha-925161-m03 event: Registered Node ha-925161-m03 in Controller
	  Normal  RegisteredNode           5m40s                  node-controller  Node ha-925161-m03 event: Registered Node ha-925161-m03 in Controller
	
	
	Name:               ha-925161-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-925161-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=ha-925161
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T04_27_30_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 04:27:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-925161-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 04:31:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 04:28:00 +0000   Fri, 19 Jul 2024 04:27:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 04:28:00 +0000   Fri, 19 Jul 2024 04:27:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 04:28:00 +0000   Fri, 19 Jul 2024 04:27:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 04:28:00 +0000   Fri, 19 Jul 2024 04:27:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.75
	  Hostname:    ha-925161-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e08274d02fa64707986686183076854f
	  System UUID:                e08274d0-2fa6-4707-9866-86183076854f
	  Boot ID:                    efd3e24c-8ce7-42df-8dd5-30a44f998179
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-dnwxp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m3s
	  kube-system                 kube-proxy-f4fgd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m3s (x3 over 4m3s)  kubelet          Node ha-925161-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m3s (x3 over 4m3s)  kubelet          Node ha-925161-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m3s (x3 over 4m3s)  kubelet          Node ha-925161-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-925161-m04 event: Registered Node ha-925161-m04 in Controller
	  Normal  RegisteredNode           4m                   node-controller  Node ha-925161-m04 event: Registered Node ha-925161-m04 in Controller
	  Normal  RegisteredNode           4m                   node-controller  Node ha-925161-m04 event: Registered Node ha-925161-m04 in Controller
	  Normal  NodeReady                3m43s                kubelet          Node ha-925161-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul19 04:22] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050649] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037163] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.426710] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.747525] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.441980] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.442247] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.062592] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054468] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.195426] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.118864] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.257746] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +3.980513] systemd-fstab-generator[774]: Ignoring "noauto" option for root device
	[Jul19 04:23] systemd-fstab-generator[956]: Ignoring "noauto" option for root device
	[  +0.065569] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.069928] systemd-fstab-generator[1370]: Ignoring "noauto" option for root device
	[  +0.091097] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.840611] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.224120] kauditd_printk_skb: 38 callbacks suppressed
	[Jul19 04:24] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [b041f48cc90cfab5ce219992de0d8ffb58d778e852683bd80044d359a0b7d010] <==
	{"level":"warn","ts":"2024-07-19T04:31:32.263742Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:31:32.320856Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:31:32.330431Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:31:32.3342Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:31:32.352562Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:31:32.362693Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:31:32.362882Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:31:32.370403Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:31:32.374739Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:31:32.379181Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:31:32.385146Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:31:32.391872Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:31:32.400048Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:31:32.407482Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:31:32.414917Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:31:32.417639Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:31:32.425123Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:31:32.430575Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:31:32.437566Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:31:32.440779Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:31:32.444502Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:31:32.450213Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:31:32.456659Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:31:32.462836Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-19T04:31:32.46302Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"b19954eb16571c64","from":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e","remote-peer-name":"pipeline","remote-peer-active":false}
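	
	These repeated "dropped internal Raft message" warnings show the local etcd member b19954eb16571c64 unable to deliver heartbeats to peer e91664def0166b0e, presumably the ha-925161-m02 member, which lines up with the m02 outage recorded above. A hedged way to inspect membership from the surviving control plane is etcdctl inside the etcd pod, assuming kubectl is pointed at this cluster; the certificate paths below are the usual minikube locations and are an assumption, not taken from this log:
	
	  $ kubectl -n kube-system exec etcd-ha-925161 -- etcdctl \
	      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	      --cert=/var/lib/minikube/certs/etcd/server.crt \
	      --key=/var/lib/minikube/certs/etcd/server.key \
	      member list -w table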
	
	
	==> kernel <==
	 04:31:32 up 8 min,  0 users,  load average: 0.19, 0.21, 0.11
	Linux ha-925161 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1109d10f2b3d47b16b6903644a11f08183882b4c704e06aa23088c57b2fa5036] <==
	I0719 04:30:54.195333       1 main.go:326] Node ha-925161-m04 has CIDR [10.244.3.0/24] 
	I0719 04:31:04.204050       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0719 04:31:04.204149       1 main.go:303] handling current node
	I0719 04:31:04.204177       1 main.go:299] Handling node with IPs: map[192.168.39.102:{}]
	I0719 04:31:04.204195       1 main.go:326] Node ha-925161-m02 has CIDR [10.244.1.0/24] 
	I0719 04:31:04.204360       1 main.go:299] Handling node with IPs: map[192.168.39.190:{}]
	I0719 04:31:04.204381       1 main.go:326] Node ha-925161-m03 has CIDR [10.244.2.0/24] 
	I0719 04:31:04.204491       1 main.go:299] Handling node with IPs: map[192.168.39.75:{}]
	I0719 04:31:04.204517       1 main.go:326] Node ha-925161-m04 has CIDR [10.244.3.0/24] 
	I0719 04:31:14.204151       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0719 04:31:14.204191       1 main.go:303] handling current node
	I0719 04:31:14.204219       1 main.go:299] Handling node with IPs: map[192.168.39.102:{}]
	I0719 04:31:14.204227       1 main.go:326] Node ha-925161-m02 has CIDR [10.244.1.0/24] 
	I0719 04:31:14.204371       1 main.go:299] Handling node with IPs: map[192.168.39.190:{}]
	I0719 04:31:14.204392       1 main.go:326] Node ha-925161-m03 has CIDR [10.244.2.0/24] 
	I0719 04:31:14.204450       1 main.go:299] Handling node with IPs: map[192.168.39.75:{}]
	I0719 04:31:14.204477       1 main.go:326] Node ha-925161-m04 has CIDR [10.244.3.0/24] 
	I0719 04:31:24.195242       1 main.go:299] Handling node with IPs: map[192.168.39.102:{}]
	I0719 04:31:24.195291       1 main.go:326] Node ha-925161-m02 has CIDR [10.244.1.0/24] 
	I0719 04:31:24.195439       1 main.go:299] Handling node with IPs: map[192.168.39.190:{}]
	I0719 04:31:24.195462       1 main.go:326] Node ha-925161-m03 has CIDR [10.244.2.0/24] 
	I0719 04:31:24.195510       1 main.go:299] Handling node with IPs: map[192.168.39.75:{}]
	I0719 04:31:24.195527       1 main.go:326] Node ha-925161-m04 has CIDR [10.244.3.0/24] 
	I0719 04:31:24.195613       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0719 04:31:24.195620       1 main.go:303] handling current node
	
	
	==> kube-apiserver [6794bae567b7e2c964bdfab18ab28a02cd5bad8823d55bae131a60e8dbefd012] <==
	I0719 04:23:07.455068       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0719 04:23:07.460827       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.246]
	I0719 04:23:07.461717       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 04:23:07.466412       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0719 04:23:07.763875       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0719 04:23:09.195985       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0719 04:23:09.221533       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0719 04:23:09.235107       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0719 04:23:21.771412       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0719 04:23:21.881186       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0719 04:26:59.223684       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42422: use of closed network connection
	E0719 04:26:59.417925       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42438: use of closed network connection
	E0719 04:26:59.776835       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42488: use of closed network connection
	E0719 04:26:59.955113       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42502: use of closed network connection
	E0719 04:27:00.136541       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42532: use of closed network connection
	E0719 04:27:00.339873       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42552: use of closed network connection
	E0719 04:27:00.525493       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42564: use of closed network connection
	E0719 04:27:00.694817       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42588: use of closed network connection
	E0719 04:27:01.006092       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42618: use of closed network connection
	E0719 04:27:01.188324       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42634: use of closed network connection
	E0719 04:27:01.374442       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42650: use of closed network connection
	E0719 04:27:01.546875       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42662: use of closed network connection
	E0719 04:27:01.720064       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42674: use of closed network connection
	E0719 04:27:01.898991       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42706: use of closed network connection
	W0719 04:28:27.475164       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.190 192.168.39.246]
	
	
	==> kube-controller-manager [882ed073edd75b4a9831d3ded02cad425e74f0eab0bb34819f37757829560513] <==
	I0719 04:26:03.406414       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.175077ms"
	I0719 04:26:03.431612       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.518832ms"
	I0719 04:26:03.431750       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="81.533µs"
	I0719 04:26:03.568596       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="99.845964ms"
	E0719 04:26:03.568628       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0719 04:26:03.568710       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.525µs"
	I0719 04:26:03.575124       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.506µs"
	I0719 04:26:04.683397       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.891µs"
	I0719 04:26:06.870145       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.991928ms"
	I0719 04:26:06.870711       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="121.023µs"
	I0719 04:26:06.966996       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.716895ms"
	I0719 04:26:06.967301       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="109.612µs"
	I0719 04:26:08.700595       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.120877ms"
	I0719 04:26:08.700847       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.911µs"
	I0719 04:26:37.073643       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="94.391µs"
	I0719 04:26:38.035158       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="97.715µs"
	I0719 04:26:38.055651       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.58µs"
	I0719 04:26:38.065132       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.611µs"
	I0719 04:27:29.839844       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-925161-m04\" does not exist"
	I0719 04:27:29.872312       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-925161-m04" podCIDRs=["10.244.3.0/24"]
	I0719 04:27:31.298051       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-925161-m04"
	I0719 04:27:49.928802       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-925161-m04"
	I0719 04:28:46.337379       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-925161-m04"
	I0719 04:28:46.465735       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.522562ms"
	I0719 04:28:46.468259       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="105.595µs"
	
	
	==> kube-proxy [6c9e12889a1662da7b7e18d67865a94825c080659d17bd601943c475e944e3b6] <==
	I0719 04:23:23.013567       1 server_linux.go:69] "Using iptables proxy"
	I0719 04:23:23.037502       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.246"]
	I0719 04:23:23.076100       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 04:23:23.076198       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 04:23:23.076252       1 server_linux.go:165] "Using iptables Proxier"
	I0719 04:23:23.080405       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 04:23:23.081098       1 server.go:872] "Version info" version="v1.30.3"
	I0719 04:23:23.081123       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 04:23:23.083190       1 config.go:192] "Starting service config controller"
	I0719 04:23:23.083504       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 04:23:23.083558       1 config.go:101] "Starting endpoint slice config controller"
	I0719 04:23:23.083576       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 04:23:23.084640       1 config.go:319] "Starting node config controller"
	I0719 04:23:23.084667       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 04:23:23.184399       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 04:23:23.184522       1 shared_informer.go:320] Caches are synced for service config
	I0719 04:23:23.184817       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [eeef22350ca0f1246fb4f32ba2c27abda6e12fc362e76159e9806d53d4296a23] <==
	W0719 04:23:07.117760       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 04:23:07.117890       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 04:23:07.179619       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 04:23:07.179713       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0719 04:23:10.118015       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0719 04:25:34.802812       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-7gvt6\": pod kindnet-7gvt6 is already assigned to node \"ha-925161-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-7gvt6" node="ha-925161-m03"
	E0719 04:25:34.803093       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 3980fcc1-695c-4b62-aab6-93872f4ddc11(kube-system/kindnet-7gvt6) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-7gvt6"
	E0719 04:25:34.803142       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-7gvt6\": pod kindnet-7gvt6 is already assigned to node \"ha-925161-m03\"" pod="kube-system/kindnet-7gvt6"
	I0719 04:25:34.803192       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-7gvt6" node="ha-925161-m03"
	E0719 04:25:34.803317       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-j6526\": pod kube-proxy-j6526 is already assigned to node \"ha-925161-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-j6526" node="ha-925161-m03"
	E0719 04:25:34.803378       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 20b69c28-de0f-4ed7-846c-848d9e938c46(kube-system/kube-proxy-j6526) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-j6526"
	E0719 04:25:34.805910       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-j6526\": pod kube-proxy-j6526 is already assigned to node \"ha-925161-m03\"" pod="kube-system/kube-proxy-j6526"
	I0719 04:25:34.806120       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-j6526" node="ha-925161-m03"
	E0719 04:26:03.007466       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-h8rpn\": pod busybox-fc5497c4f-h8rpn is already assigned to node \"ha-925161-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-h8rpn" node="ha-925161-m02"
	E0719 04:26:03.007620       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-h8rpn\": pod busybox-fc5497c4f-h8rpn is already assigned to node \"ha-925161-m03\"" pod="default/busybox-fc5497c4f-h8rpn"
	E0719 04:27:29.902023       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-f4fgd\": pod kube-proxy-f4fgd is already assigned to node \"ha-925161-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-f4fgd" node="ha-925161-m04"
	E0719 04:27:29.902117       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-f4fgd\": pod kube-proxy-f4fgd is already assigned to node \"ha-925161-m04\"" pod="kube-system/kube-proxy-f4fgd"
	E0719 04:27:29.950616       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-dnwxp\": pod kindnet-dnwxp is already assigned to node \"ha-925161-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-dnwxp" node="ha-925161-m04"
	E0719 04:27:29.952590       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod bb80bffc-8a33-4e45-9d7e-560526e289a7(kube-system/kindnet-dnwxp) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-dnwxp"
	E0719 04:27:29.952714       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-dnwxp\": pod kindnet-dnwxp is already assigned to node \"ha-925161-m04\"" pod="kube-system/kindnet-dnwxp"
	I0719 04:27:29.952830       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-dnwxp" node="ha-925161-m04"
	E0719 04:27:30.048921       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-2cxws\": pod kindnet-2cxws is already assigned to node \"ha-925161-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-2cxws" node="ha-925161-m04"
	E0719 04:27:30.051009       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod bf5c4d4d-bf9a-42c4-8e17-ded79b29fbf0(kube-system/kindnet-2cxws) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-2cxws"
	E0719 04:27:30.051082       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-2cxws\": pod kindnet-2cxws is already assigned to node \"ha-925161-m04\"" pod="kube-system/kindnet-2cxws"
	I0719 04:27:30.051128       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-2cxws" node="ha-925161-m04"
	
	
	==> kubelet <==
	Jul 19 04:27:09 ha-925161 kubelet[1377]: E0719 04:27:09.121109    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 04:27:09 ha-925161 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 04:27:09 ha-925161 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 04:27:09 ha-925161 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 04:27:09 ha-925161 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 04:28:09 ha-925161 kubelet[1377]: E0719 04:28:09.118663    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 04:28:09 ha-925161 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 04:28:09 ha-925161 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 04:28:09 ha-925161 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 04:28:09 ha-925161 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 04:29:09 ha-925161 kubelet[1377]: E0719 04:29:09.118773    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 04:29:09 ha-925161 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 04:29:09 ha-925161 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 04:29:09 ha-925161 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 04:29:09 ha-925161 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 04:30:09 ha-925161 kubelet[1377]: E0719 04:30:09.118645    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 04:30:09 ha-925161 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 04:30:09 ha-925161 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 04:30:09 ha-925161 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 04:30:09 ha-925161 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 04:31:09 ha-925161 kubelet[1377]: E0719 04:31:09.118445    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 04:31:09 ha-925161 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 04:31:09 ha-925161 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 04:31:09 ha-925161 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 04:31:09 ha-925161 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-925161 -n ha-925161
helpers_test.go:261: (dbg) Run:  kubectl --context ha-925161 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (58.03s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (365.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-925161 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-925161 -v=7 --alsologtostderr
E0719 04:31:36.834746  130170 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/functional-554179/client.crt: no such file or directory
E0719 04:32:04.519074  130170 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/functional-554179/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-925161 -v=7 --alsologtostderr: exit status 82 (2m1.793809681s)

                                                
                                                
-- stdout --
	* Stopping node "ha-925161-m04"  ...
	* Stopping node "ha-925161-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 04:31:33.892532  151353 out.go:291] Setting OutFile to fd 1 ...
	I0719 04:31:33.892895  151353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:31:33.892936  151353 out.go:304] Setting ErrFile to fd 2...
	I0719 04:31:33.892945  151353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:31:33.893417  151353 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-122995/.minikube/bin
	I0719 04:31:33.893764  151353 out.go:298] Setting JSON to false
	I0719 04:31:33.893932  151353 mustload.go:65] Loading cluster: ha-925161
	I0719 04:31:33.894738  151353 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:31:33.894923  151353 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/config.json ...
	I0719 04:31:33.895145  151353 mustload.go:65] Loading cluster: ha-925161
	I0719 04:31:33.895343  151353 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:31:33.895400  151353 stop.go:39] StopHost: ha-925161-m04
	I0719 04:31:33.895945  151353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:33.895991  151353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:33.910770  151353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34693
	I0719 04:31:33.911296  151353 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:33.911860  151353 main.go:141] libmachine: Using API Version  1
	I0719 04:31:33.911884  151353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:33.912291  151353 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:33.914698  151353 out.go:177] * Stopping node "ha-925161-m04"  ...
	I0719 04:31:33.915853  151353 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0719 04:31:33.915898  151353 main.go:141] libmachine: (ha-925161-m04) Calling .DriverName
	I0719 04:31:33.916138  151353 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0719 04:31:33.916162  151353 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHHostname
	I0719 04:31:33.919109  151353 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:31:33.919485  151353 main.go:141] libmachine: (ha-925161-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:a2:b6", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:27:16 +0000 UTC Type:0 Mac:52:54:00:cb:a2:b6 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:ha-925161-m04 Clientid:01:52:54:00:cb:a2:b6}
	I0719 04:31:33.919503  151353 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined IP address 192.168.39.75 and MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:31:33.919605  151353 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHPort
	I0719 04:31:33.919768  151353 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHKeyPath
	I0719 04:31:33.919927  151353 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHUsername
	I0719 04:31:33.920066  151353 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m04/id_rsa Username:docker}
	I0719 04:31:34.010577  151353 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0719 04:31:34.063869  151353 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0719 04:31:34.116767  151353 main.go:141] libmachine: Stopping "ha-925161-m04"...
	I0719 04:31:34.116795  151353 main.go:141] libmachine: (ha-925161-m04) Calling .GetState
	I0719 04:31:34.118322  151353 main.go:141] libmachine: (ha-925161-m04) Calling .Stop
	I0719 04:31:34.121719  151353 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 0/120
	I0719 04:31:35.225651  151353 main.go:141] libmachine: (ha-925161-m04) Calling .GetState
	I0719 04:31:35.226798  151353 main.go:141] libmachine: Machine "ha-925161-m04" was stopped.
	I0719 04:31:35.226819  151353 stop.go:75] duration metric: took 1.31096657s to stop
	I0719 04:31:35.226840  151353 stop.go:39] StopHost: ha-925161-m03
	I0719 04:31:35.227136  151353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:31:35.227172  151353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:31:35.242062  151353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37219
	I0719 04:31:35.242536  151353 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:31:35.243051  151353 main.go:141] libmachine: Using API Version  1
	I0719 04:31:35.243073  151353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:31:35.243402  151353 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:31:35.245506  151353 out.go:177] * Stopping node "ha-925161-m03"  ...
	I0719 04:31:35.246650  151353 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0719 04:31:35.246673  151353 main.go:141] libmachine: (ha-925161-m03) Calling .DriverName
	I0719 04:31:35.246887  151353 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0719 04:31:35.246910  151353 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHHostname
	I0719 04:31:35.249807  151353 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:31:35.250316  151353 main.go:141] libmachine: (ha-925161-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7e:5f:eb", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:24:58 +0000 UTC Type:0 Mac:52:54:00:7e:5f:eb Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-925161-m03 Clientid:01:52:54:00:7e:5f:eb}
	I0719 04:31:35.250346  151353 main.go:141] libmachine: (ha-925161-m03) DBG | domain ha-925161-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:7e:5f:eb in network mk-ha-925161
	I0719 04:31:35.250549  151353 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHPort
	I0719 04:31:35.250720  151353 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHKeyPath
	I0719 04:31:35.250882  151353 main.go:141] libmachine: (ha-925161-m03) Calling .GetSSHUsername
	I0719 04:31:35.251043  151353 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m03/id_rsa Username:docker}
	I0719 04:31:35.341842  151353 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0719 04:31:35.395011  151353 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0719 04:31:35.449671  151353 main.go:141] libmachine: Stopping "ha-925161-m03"...
	I0719 04:31:35.449701  151353 main.go:141] libmachine: (ha-925161-m03) Calling .GetState
	I0719 04:31:35.451157  151353 main.go:141] libmachine: (ha-925161-m03) Calling .Stop
	I0719 04:31:35.454700  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 0/120
	I0719 04:31:36.456126  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 1/120
	I0719 04:31:37.458075  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 2/120
	I0719 04:31:38.459516  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 3/120
	I0719 04:31:39.460905  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 4/120
	I0719 04:31:40.462857  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 5/120
	I0719 04:31:41.464497  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 6/120
	I0719 04:31:42.465894  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 7/120
	I0719 04:31:43.467503  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 8/120
	I0719 04:31:44.468895  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 9/120
	I0719 04:31:45.470799  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 10/120
	I0719 04:31:46.472219  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 11/120
	I0719 04:31:47.473520  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 12/120
	I0719 04:31:48.474892  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 13/120
	I0719 04:31:49.476258  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 14/120
	I0719 04:31:50.478507  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 15/120
	I0719 04:31:51.479832  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 16/120
	I0719 04:31:52.481280  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 17/120
	I0719 04:31:53.482837  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 18/120
	I0719 04:31:54.484059  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 19/120
	I0719 04:31:55.486364  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 20/120
	I0719 04:31:56.487835  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 21/120
	I0719 04:31:57.489729  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 22/120
	I0719 04:31:58.491223  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 23/120
	I0719 04:31:59.492669  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 24/120
	I0719 04:32:00.494492  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 25/120
	I0719 04:32:01.495932  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 26/120
	I0719 04:32:02.497670  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 27/120
	I0719 04:32:03.499196  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 28/120
	I0719 04:32:04.500716  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 29/120
	I0719 04:32:05.502596  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 30/120
	I0719 04:32:06.504256  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 31/120
	I0719 04:32:07.505720  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 32/120
	I0719 04:32:08.507163  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 33/120
	I0719 04:32:09.508606  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 34/120
	I0719 04:32:10.510263  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 35/120
	I0719 04:32:11.511524  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 36/120
	I0719 04:32:12.513019  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 37/120
	I0719 04:32:13.514368  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 38/120
	I0719 04:32:14.516113  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 39/120
	I0719 04:32:15.518579  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 40/120
	I0719 04:32:16.519906  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 41/120
	I0719 04:32:17.521465  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 42/120
	I0719 04:32:18.522779  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 43/120
	I0719 04:32:19.523972  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 44/120
	I0719 04:32:20.525488  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 45/120
	I0719 04:32:21.527628  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 46/120
	I0719 04:32:22.529021  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 47/120
	I0719 04:32:23.530407  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 48/120
	I0719 04:32:24.531950  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 49/120
	I0719 04:32:25.533807  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 50/120
	I0719 04:32:26.535891  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 51/120
	I0719 04:32:27.538108  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 52/120
	I0719 04:32:28.539613  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 53/120
	I0719 04:32:29.541116  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 54/120
	I0719 04:32:30.542933  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 55/120
	I0719 04:32:31.544324  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 56/120
	I0719 04:32:32.545576  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 57/120
	I0719 04:32:33.546822  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 58/120
	I0719 04:32:34.548006  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 59/120
	I0719 04:32:35.549646  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 60/120
	I0719 04:32:36.550920  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 61/120
	I0719 04:32:37.552194  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 62/120
	I0719 04:32:38.553485  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 63/120
	I0719 04:32:39.554727  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 64/120
	I0719 04:32:40.556566  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 65/120
	I0719 04:32:41.557862  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 66/120
	I0719 04:32:42.559052  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 67/120
	I0719 04:32:43.560450  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 68/120
	I0719 04:32:44.562742  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 69/120
	I0719 04:32:45.564519  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 70/120
	I0719 04:32:46.566082  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 71/120
	I0719 04:32:47.567199  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 72/120
	I0719 04:32:48.568604  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 73/120
	I0719 04:32:49.569809  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 74/120
	I0719 04:32:50.571347  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 75/120
	I0719 04:32:51.572779  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 76/120
	I0719 04:32:52.574941  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 77/120
	I0719 04:32:53.576384  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 78/120
	I0719 04:32:54.577742  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 79/120
	I0719 04:32:55.579525  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 80/120
	I0719 04:32:56.580781  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 81/120
	I0719 04:32:57.581972  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 82/120
	I0719 04:32:58.583394  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 83/120
	I0719 04:32:59.584910  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 84/120
	I0719 04:33:00.586604  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 85/120
	I0719 04:33:01.588274  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 86/120
	I0719 04:33:02.589985  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 87/120
	I0719 04:33:03.591468  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 88/120
	I0719 04:33:04.592662  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 89/120
	I0719 04:33:05.594158  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 90/120
	I0719 04:33:06.595522  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 91/120
	I0719 04:33:07.596707  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 92/120
	I0719 04:33:08.598219  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 93/120
	I0719 04:33:09.599489  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 94/120
	I0719 04:33:10.601147  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 95/120
	I0719 04:33:11.603455  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 96/120
	I0719 04:33:12.604817  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 97/120
	I0719 04:33:13.606004  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 98/120
	I0719 04:33:14.607356  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 99/120
	I0719 04:33:15.609092  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 100/120
	I0719 04:33:16.610419  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 101/120
	I0719 04:33:17.611837  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 102/120
	I0719 04:33:18.613285  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 103/120
	I0719 04:33:19.614815  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 104/120
	I0719 04:33:20.616434  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 105/120
	I0719 04:33:21.617785  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 106/120
	I0719 04:33:22.619038  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 107/120
	I0719 04:33:23.620373  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 108/120
	I0719 04:33:24.621787  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 109/120
	I0719 04:33:25.623582  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 110/120
	I0719 04:33:26.625140  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 111/120
	I0719 04:33:27.626476  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 112/120
	I0719 04:33:28.627755  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 113/120
	I0719 04:33:29.629224  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 114/120
	I0719 04:33:30.630916  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 115/120
	I0719 04:33:31.632461  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 116/120
	I0719 04:33:32.633867  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 117/120
	I0719 04:33:33.635190  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 118/120
	I0719 04:33:34.636572  151353 main.go:141] libmachine: (ha-925161-m03) Waiting for machine to stop 119/120
	I0719 04:33:35.637612  151353 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0719 04:33:35.637693  151353 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0719 04:33:35.639637  151353 out.go:177] 
	W0719 04:33:35.641437  151353 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0719 04:33:35.641464  151353 out.go:239] * 
	* 
	W0719 04:33:35.643624  151353 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 04:33:35.645017  151353 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-925161 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-925161 --wait=true -v=7 --alsologtostderr
E0719 04:36:36.834995  130170 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/functional-554179/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-925161 --wait=true -v=7 --alsologtostderr: (4m1.326675834s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-925161
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-925161 -n ha-925161
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-925161 logs -n 25: (1.693347008s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-925161 cp ha-925161-m03:/home/docker/cp-test.txt                              | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m02:/home/docker/cp-test_ha-925161-m03_ha-925161-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n                                                                 | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n ha-925161-m02 sudo cat                                          | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | /home/docker/cp-test_ha-925161-m03_ha-925161-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-925161 cp ha-925161-m03:/home/docker/cp-test.txt                              | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m04:/home/docker/cp-test_ha-925161-m03_ha-925161-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n                                                                 | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n ha-925161-m04 sudo cat                                          | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | /home/docker/cp-test_ha-925161-m03_ha-925161-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-925161 cp testdata/cp-test.txt                                                | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n                                                                 | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-925161 cp ha-925161-m04:/home/docker/cp-test.txt                              | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3159028946/001/cp-test_ha-925161-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n                                                                 | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-925161 cp ha-925161-m04:/home/docker/cp-test.txt                              | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161:/home/docker/cp-test_ha-925161-m04_ha-925161.txt                       |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n                                                                 | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n ha-925161 sudo cat                                              | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | /home/docker/cp-test_ha-925161-m04_ha-925161.txt                                 |           |         |         |                     |                     |
	| cp      | ha-925161 cp ha-925161-m04:/home/docker/cp-test.txt                              | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m02:/home/docker/cp-test_ha-925161-m04_ha-925161-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n                                                                 | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n ha-925161-m02 sudo cat                                          | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | /home/docker/cp-test_ha-925161-m04_ha-925161-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-925161 cp ha-925161-m04:/home/docker/cp-test.txt                              | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m03:/home/docker/cp-test_ha-925161-m04_ha-925161-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n                                                                 | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n ha-925161-m03 sudo cat                                          | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | /home/docker/cp-test_ha-925161-m04_ha-925161-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-925161 node stop m02 -v=7                                                     | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-925161 node start m02 -v=7                                                    | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:30 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-925161 -v=7                                                           | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:31 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-925161 -v=7                                                                | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:31 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-925161 --wait=true -v=7                                                    | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:33 UTC | 19 Jul 24 04:37 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-925161                                                                | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:37 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 04:33:35
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 04:33:35.692545  151865 out.go:291] Setting OutFile to fd 1 ...
	I0719 04:33:35.692686  151865 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:33:35.692697  151865 out.go:304] Setting ErrFile to fd 2...
	I0719 04:33:35.692703  151865 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:33:35.693141  151865 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-122995/.minikube/bin
	I0719 04:33:35.693745  151865 out.go:298] Setting JSON to false
	I0719 04:33:35.694657  151865 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8159,"bootTime":1721355457,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 04:33:35.694716  151865 start.go:139] virtualization: kvm guest
	I0719 04:33:35.697285  151865 out.go:177] * [ha-925161] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 04:33:35.699142  151865 notify.go:220] Checking for updates...
	I0719 04:33:35.699161  151865 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 04:33:35.700706  151865 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 04:33:35.702204  151865 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-122995/kubeconfig
	I0719 04:33:35.703629  151865 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-122995/.minikube
	I0719 04:33:35.704810  151865 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 04:33:35.705933  151865 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 04:33:35.707522  151865 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:33:35.707656  151865 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 04:33:35.708094  151865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:33:35.708142  151865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:33:35.723581  151865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39875
	I0719 04:33:35.724070  151865 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:33:35.724608  151865 main.go:141] libmachine: Using API Version  1
	I0719 04:33:35.724629  151865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:33:35.725037  151865 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:33:35.725283  151865 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:33:35.760630  151865 out.go:177] * Using the kvm2 driver based on existing profile
	I0719 04:33:35.761871  151865 start.go:297] selected driver: kvm2
	I0719 04:33:35.761891  151865 start.go:901] validating driver "kvm2" against &{Name:ha-925161 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-925161 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.75 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:33:35.762052  151865 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 04:33:35.762386  151865 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 04:33:35.762462  151865 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-122995/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 04:33:35.777973  151865 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 04:33:35.778592  151865 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 04:33:35.778632  151865 cni.go:84] Creating CNI manager for ""
	I0719 04:33:35.778637  151865 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0719 04:33:35.778686  151865 start.go:340] cluster config:
	{Name:ha-925161 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-925161 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.75 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:33:35.778787  151865 iso.go:125] acquiring lock: {Name:mk610026cb7ac7ecfa6440021a031d3b49160f81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 04:33:35.780926  151865 out.go:177] * Starting "ha-925161" primary control-plane node in "ha-925161" cluster
	I0719 04:33:35.782331  151865 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 04:33:35.782369  151865 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 04:33:35.782377  151865 cache.go:56] Caching tarball of preloaded images
	I0719 04:33:35.782461  151865 preload.go:172] Found /home/jenkins/minikube-integration/19302-122995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 04:33:35.782473  151865 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 04:33:35.782575  151865 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/config.json ...
	I0719 04:33:35.782767  151865 start.go:360] acquireMachinesLock for ha-925161: {Name:mkfbbe6ca8c44534b944b48224a0199ec825bc72 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 04:33:35.782824  151865 start.go:364] duration metric: took 24.807µs to acquireMachinesLock for "ha-925161"
	I0719 04:33:35.782845  151865 start.go:96] Skipping create...Using existing machine configuration
	I0719 04:33:35.782853  151865 fix.go:54] fixHost starting: 
	I0719 04:33:35.783112  151865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:33:35.783136  151865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:33:35.797552  151865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32929
	I0719 04:33:35.797951  151865 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:33:35.798497  151865 main.go:141] libmachine: Using API Version  1
	I0719 04:33:35.798516  151865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:33:35.798911  151865 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:33:35.799154  151865 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:33:35.799324  151865 main.go:141] libmachine: (ha-925161) Calling .GetState
	I0719 04:33:35.800875  151865 fix.go:112] recreateIfNeeded on ha-925161: state=Running err=<nil>
	W0719 04:33:35.800900  151865 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 04:33:35.802896  151865 out.go:177] * Updating the running kvm2 "ha-925161" VM ...
	I0719 04:33:35.804152  151865 machine.go:94] provisionDockerMachine start ...
	I0719 04:33:35.804172  151865 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:33:35.804372  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:33:35.807109  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:33:35.807552  151865 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:33:35.807584  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:33:35.807725  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:33:35.807894  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:33:35.808059  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:33:35.808198  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:33:35.808382  151865 main.go:141] libmachine: Using SSH client type: native
	I0719 04:33:35.808567  151865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0719 04:33:35.808578  151865 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 04:33:35.926803  151865 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-925161
	
	I0719 04:33:35.926885  151865 main.go:141] libmachine: (ha-925161) Calling .GetMachineName
	I0719 04:33:35.927210  151865 buildroot.go:166] provisioning hostname "ha-925161"
	I0719 04:33:35.927236  151865 main.go:141] libmachine: (ha-925161) Calling .GetMachineName
	I0719 04:33:35.927449  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:33:35.929933  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:33:35.930288  151865 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:33:35.930319  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:33:35.930518  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:33:35.930691  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:33:35.930821  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:33:35.930971  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:33:35.931113  151865 main.go:141] libmachine: Using SSH client type: native
	I0719 04:33:35.931311  151865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0719 04:33:35.931327  151865 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-925161 && echo "ha-925161" | sudo tee /etc/hostname
	I0719 04:33:36.061811  151865 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-925161
	
	I0719 04:33:36.061854  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:33:36.064593  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:33:36.064981  151865 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:33:36.065011  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:33:36.065247  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:33:36.065452  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:33:36.065608  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:33:36.065721  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:33:36.065851  151865 main.go:141] libmachine: Using SSH client type: native
	I0719 04:33:36.066017  151865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0719 04:33:36.066033  151865 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-925161' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-925161/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-925161' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 04:33:36.177916  151865 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 04:33:36.177955  151865 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-122995/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-122995/.minikube}
	I0719 04:33:36.177995  151865 buildroot.go:174] setting up certificates
	I0719 04:33:36.178010  151865 provision.go:84] configureAuth start
	I0719 04:33:36.178028  151865 main.go:141] libmachine: (ha-925161) Calling .GetMachineName
	I0719 04:33:36.178382  151865 main.go:141] libmachine: (ha-925161) Calling .GetIP
	I0719 04:33:36.180893  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:33:36.181268  151865 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:33:36.181309  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:33:36.181462  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:33:36.183623  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:33:36.184017  151865 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:33:36.184039  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:33:36.184212  151865 provision.go:143] copyHostCerts
	I0719 04:33:36.184256  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem
	I0719 04:33:36.184307  151865 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem, removing ...
	I0719 04:33:36.184319  151865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem
	I0719 04:33:36.184414  151865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem (1082 bytes)
	I0719 04:33:36.184515  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem
	I0719 04:33:36.184541  151865 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem, removing ...
	I0719 04:33:36.184548  151865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem
	I0719 04:33:36.184590  151865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem (1123 bytes)
	I0719 04:33:36.184651  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem
	I0719 04:33:36.184673  151865 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem, removing ...
	I0719 04:33:36.184681  151865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem
	I0719 04:33:36.184713  151865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem (1679 bytes)
	I0719 04:33:36.184775  151865 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem org=jenkins.ha-925161 san=[127.0.0.1 192.168.39.246 ha-925161 localhost minikube]
	I0719 04:33:36.234680  151865 provision.go:177] copyRemoteCerts
	I0719 04:33:36.234742  151865 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 04:33:36.234767  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:33:36.237251  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:33:36.237570  151865 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:33:36.237594  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:33:36.237769  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:33:36.237947  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:33:36.238087  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:33:36.238221  151865 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:33:36.323251  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 04:33:36.323330  151865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0719 04:33:36.346931  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 04:33:36.347034  151865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 04:33:36.370183  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 04:33:36.370266  151865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 04:33:36.392967  151865 provision.go:87] duration metric: took 214.93921ms to configureAuth
	I0719 04:33:36.392993  151865 buildroot.go:189] setting minikube options for container-runtime
	I0719 04:33:36.393264  151865 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:33:36.393367  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:33:36.395947  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:33:36.396474  151865 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:33:36.396506  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:33:36.396794  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:33:36.397012  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:33:36.397235  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:33:36.397386  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:33:36.397557  151865 main.go:141] libmachine: Using SSH client type: native
	I0719 04:33:36.397752  151865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0719 04:33:36.397769  151865 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 04:35:07.347124  151865 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 04:35:07.347170  151865 machine.go:97] duration metric: took 1m31.54300109s to provisionDockerMachine
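The "%!s(MISSING)" in the SSH command logged at 04:33:36.397769 above is Go's printf rendering of a literal "%s" format string, not part of the command itself; reconstructed from the content shown in the log, the effective command is approximately:

	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio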
	I0719 04:35:07.347186  151865 start.go:293] postStartSetup for "ha-925161" (driver="kvm2")
	I0719 04:35:07.347212  151865 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 04:35:07.347235  151865 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:35:07.347590  151865 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 04:35:07.347622  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:35:07.350874  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:35:07.351334  151865 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:35:07.351368  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:35:07.351553  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:35:07.351736  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:35:07.351884  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:35:07.352043  151865 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:35:07.436387  151865 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 04:35:07.440446  151865 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 04:35:07.440478  151865 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-122995/.minikube/addons for local assets ...
	I0719 04:35:07.440571  151865 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-122995/.minikube/files for local assets ...
	I0719 04:35:07.440652  151865 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem -> 1301702.pem in /etc/ssl/certs
	I0719 04:35:07.440663  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem -> /etc/ssl/certs/1301702.pem
	I0719 04:35:07.440744  151865 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 04:35:07.449620  151865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem --> /etc/ssl/certs/1301702.pem (1708 bytes)
	I0719 04:35:07.473090  151865 start.go:296] duration metric: took 125.866906ms for postStartSetup
	I0719 04:35:07.473139  151865 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:35:07.473444  151865 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0719 04:35:07.473472  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:35:07.476155  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:35:07.476514  151865 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:35:07.476542  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:35:07.476711  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:35:07.476902  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:35:07.477103  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:35:07.477235  151865 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	W0719 04:35:07.559252  151865 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0719 04:35:07.559280  151865 fix.go:56] duration metric: took 1m31.776428279s for fixHost
	I0719 04:35:07.559303  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:35:07.562017  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:35:07.562292  151865 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:35:07.562320  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:35:07.562479  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:35:07.562733  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:35:07.562909  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:35:07.563105  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:35:07.563276  151865 main.go:141] libmachine: Using SSH client type: native
	I0719 04:35:07.563437  151865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0719 04:35:07.563447  151865 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 04:35:07.680985  151865 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721363707.638853844
	
	I0719 04:35:07.681009  151865 fix.go:216] guest clock: 1721363707.638853844
	I0719 04:35:07.681016  151865 fix.go:229] Guest: 2024-07-19 04:35:07.638853844 +0000 UTC Remote: 2024-07-19 04:35:07.55928743 +0000 UTC m=+91.903391287 (delta=79.566414ms)
	I0719 04:35:07.681035  151865 fix.go:200] guest clock delta is within tolerance: 79.566414ms
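The guest-clock probe rendered above as "date +%!s(MISSING).%!N(MISSING)" is the same printf artifact; the command actually run is simply:

	date +%s.%N    # seconds.nanoseconds, e.g. 1721363707.638853844 as echoed back above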
	I0719 04:35:07.681041  151865 start.go:83] releasing machines lock for "ha-925161", held for 1m31.898203709s
	I0719 04:35:07.681058  151865 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:35:07.681408  151865 main.go:141] libmachine: (ha-925161) Calling .GetIP
	I0719 04:35:07.684253  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:35:07.684689  151865 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:35:07.684720  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:35:07.684881  151865 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:35:07.685468  151865 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:35:07.685670  151865 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:35:07.685783  151865 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 04:35:07.685832  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:35:07.685901  151865 ssh_runner.go:195] Run: cat /version.json
	I0719 04:35:07.685927  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:35:07.688778  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:35:07.688802  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:35:07.689336  151865 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:35:07.689363  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:35:07.689393  151865 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:35:07.689406  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:35:07.689517  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:35:07.689670  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:35:07.689728  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:35:07.689810  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:35:07.689931  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:35:07.689974  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:35:07.690049  151865 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:35:07.690163  151865 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:35:07.845554  151865 ssh_runner.go:195] Run: systemctl --version
	I0719 04:35:07.852398  151865 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 04:35:08.006430  151865 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 04:35:08.012288  151865 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 04:35:08.012379  151865 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 04:35:08.021312  151865 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
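The find invocation at 04:35:08.012379 is logged as raw argv (hence the unquoted globs and the "%!p(MISSING)" artifact standing in for the "%p" printf format); written out for a plain shell it is approximately:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " -exec sh -c 'sudo mv {} {}.mk_disabled' \;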
	I0719 04:35:08.021333  151865 start.go:495] detecting cgroup driver to use...
	I0719 04:35:08.021391  151865 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 04:35:08.037463  151865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 04:35:08.051513  151865 docker.go:217] disabling cri-docker service (if available) ...
	I0719 04:35:08.051575  151865 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 04:35:08.064817  151865 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 04:35:08.077749  151865 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 04:35:08.223946  151865 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 04:35:08.373086  151865 docker.go:233] disabling docker service ...
	I0719 04:35:08.373179  151865 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 04:35:08.391874  151865 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 04:35:08.404580  151865 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 04:35:08.559818  151865 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 04:35:08.717444  151865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 04:35:08.732752  151865 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 04:35:08.750258  151865 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 04:35:08.750328  151865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:35:08.760675  151865 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 04:35:08.760751  151865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:35:08.770640  151865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:35:08.780133  151865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:35:08.789546  151865 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 04:35:08.799278  151865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:35:08.809283  151865 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:35:08.819312  151865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:35:08.828716  151865 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 04:35:08.837396  151865 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 04:35:08.846127  151865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:35:08.989262  151865 ssh_runner.go:195] Run: sudo systemctl restart crio
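For readability, the crictl/CRI-O reconfiguration between 04:35:08.732752 and 04:35:08.989262 amounts to the following sketch; the crictl.yaml content is taken from the logged printf argument (again rendered as "%!s(MISSING)"), and the sed edits are quoted as logged:

	# point crictl at the CRI-O socket
	printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml
	
	# adjust /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs driver, conmon cgroup, sysctls), e.g.:
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	
	# reload units and restart the runtime
	sudo systemctl daemon-reload && sudo systemctl restart crio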
	I0719 04:35:09.438106  151865 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 04:35:09.438190  151865 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 04:35:09.442887  151865 start.go:563] Will wait 60s for crictl version
	I0719 04:35:09.442934  151865 ssh_runner.go:195] Run: which crictl
	I0719 04:35:09.446388  151865 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 04:35:09.487544  151865 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 04:35:09.487623  151865 ssh_runner.go:195] Run: crio --version
	I0719 04:35:09.514226  151865 ssh_runner.go:195] Run: crio --version
	I0719 04:35:09.542867  151865 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 04:35:09.544162  151865 main.go:141] libmachine: (ha-925161) Calling .GetIP
	I0719 04:35:09.546827  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:35:09.547202  151865 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:35:09.547233  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:35:09.547408  151865 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 04:35:09.551827  151865 kubeadm.go:883] updating cluster {Name:ha-925161 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-925161 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.75 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 04:35:09.551955  151865 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 04:35:09.551998  151865 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 04:35:09.594369  151865 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 04:35:09.594393  151865 crio.go:433] Images already preloaded, skipping extraction
	I0719 04:35:09.594443  151865 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 04:35:09.626693  151865 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 04:35:09.626718  151865 cache_images.go:84] Images are preloaded, skipping loading
	I0719 04:35:09.626729  151865 kubeadm.go:934] updating node { 192.168.39.246 8443 v1.30.3 crio true true} ...
	I0719 04:35:09.626846  151865 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-925161 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-925161 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 04:35:09.626926  151865 ssh_runner.go:195] Run: crio config
	I0719 04:35:09.671551  151865 cni.go:84] Creating CNI manager for ""
	I0719 04:35:09.671576  151865 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0719 04:35:09.671586  151865 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 04:35:09.671608  151865 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.246 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-925161 NodeName:ha-925161 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 04:35:09.671752  151865 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-925161"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.246
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
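In the KubeletConfiguration section above, the values rendered as "0%!"(MISSING) are the same printf artifact; the intended eviction thresholds, which disable disk-based eviction, are plain percentages:

	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"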
	
	I0719 04:35:09.671770  151865 kube-vip.go:115] generating kube-vip config ...
	I0719 04:35:09.671812  151865 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0719 04:35:09.682679  151865 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0719 04:35:09.682794  151865 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0719 04:35:09.682858  151865 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 04:35:09.698765  151865 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 04:35:09.698838  151865 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0719 04:35:09.707474  151865 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0719 04:35:09.722824  151865 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 04:35:09.737996  151865 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0719 04:35:09.753411  151865 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0719 04:35:09.769703  151865 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0719 04:35:09.773346  151865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:35:09.914993  151865 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 04:35:09.929494  151865 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161 for IP: 192.168.39.246
	I0719 04:35:09.929519  151865 certs.go:194] generating shared ca certs ...
	I0719 04:35:09.929541  151865 certs.go:226] acquiring lock for ca certs: {Name:mk4073377b5f511f5cfaf63e5b0f12377e731a57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:35:09.929734  151865 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key
	I0719 04:35:09.929785  151865 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key
	I0719 04:35:09.929796  151865 certs.go:256] generating profile certs ...
	I0719 04:35:09.929907  151865 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/client.key
	I0719 04:35:09.929935  151865 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key.e5d4f658
	I0719 04:35:09.929950  151865 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt.e5d4f658 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246 192.168.39.102 192.168.39.190 192.168.39.254]
	I0719 04:35:10.047641  151865 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt.e5d4f658 ...
	I0719 04:35:10.047673  151865 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt.e5d4f658: {Name:mk89a72b0e2e9fa9b2ea52621e70171d251b7911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:35:10.047847  151865 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key.e5d4f658 ...
	I0719 04:35:10.047859  151865 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key.e5d4f658: {Name:mkea9a4f1a5669869dceecbc30924745027a923d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:35:10.047930  151865 certs.go:381] copying /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt.e5d4f658 -> /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt
	I0719 04:35:10.048077  151865 certs.go:385] copying /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key.e5d4f658 -> /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key
	I0719 04:35:10.048207  151865 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.key
	I0719 04:35:10.048223  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 04:35:10.048235  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0719 04:35:10.048249  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 04:35:10.048261  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 04:35:10.048275  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 04:35:10.048294  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 04:35:10.048306  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 04:35:10.048323  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 04:35:10.048376  151865 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem (1338 bytes)
	W0719 04:35:10.048405  151865 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170_empty.pem, impossibly tiny 0 bytes
	I0719 04:35:10.048414  151865 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 04:35:10.048438  151865 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem (1082 bytes)
	I0719 04:35:10.048483  151865 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem (1123 bytes)
	I0719 04:35:10.048512  151865 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem (1679 bytes)
	I0719 04:35:10.048551  151865 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem (1708 bytes)
	I0719 04:35:10.048576  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem -> /usr/share/ca-certificates/1301702.pem
	I0719 04:35:10.048589  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:35:10.048601  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem -> /usr/share/ca-certificates/130170.pem
	I0719 04:35:10.049192  151865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 04:35:10.073338  151865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 04:35:10.095128  151865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 04:35:10.116541  151865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 04:35:10.138353  151865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0719 04:35:10.159442  151865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 04:35:10.180524  151865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 04:35:10.201852  151865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 04:35:10.223274  151865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem --> /usr/share/ca-certificates/1301702.pem (1708 bytes)
	I0719 04:35:10.245315  151865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 04:35:10.266817  151865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem --> /usr/share/ca-certificates/130170.pem (1338 bytes)
	I0719 04:35:10.289176  151865 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 04:35:10.305055  151865 ssh_runner.go:195] Run: openssl version
	I0719 04:35:10.310807  151865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1301702.pem && ln -fs /usr/share/ca-certificates/1301702.pem /etc/ssl/certs/1301702.pem"
	I0719 04:35:10.321049  151865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1301702.pem
	I0719 04:35:10.325183  151865 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 04:19 /usr/share/ca-certificates/1301702.pem
	I0719 04:35:10.325228  151865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1301702.pem
	I0719 04:35:10.330487  151865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1301702.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 04:35:10.339428  151865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 04:35:10.349420  151865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:35:10.353391  151865 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:38 /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:35:10.353433  151865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:35:10.358669  151865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 04:35:10.367835  151865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130170.pem && ln -fs /usr/share/ca-certificates/130170.pem /etc/ssl/certs/130170.pem"
	I0719 04:35:10.378177  151865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130170.pem
	I0719 04:35:10.382221  151865 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 04:19 /usr/share/ca-certificates/130170.pem
	I0719 04:35:10.382261  151865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130170.pem
	I0719 04:35:10.387375  151865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/130170.pem /etc/ssl/certs/51391683.0"
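	The three link steps above follow the standard OpenSSL CA-directory convention: each certificate's subject hash (printed by `openssl x509 -hash`) becomes a `<hash>.0` symlink under /etc/ssl/certs so the system trust store can locate it. A minimal sketch of that convention, using the minikubeCA hash observed in this run (b5213941):
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0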
	I0719 04:35:10.396014  151865 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 04:35:10.400295  151865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 04:35:10.405601  151865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 04:35:10.410768  151865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 04:35:10.415783  151865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 04:35:10.421003  151865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 04:35:10.426122  151865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
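	The `-checkend 86400` invocations above ask OpenSSL whether each certificate will still be valid 24 hours from now; the command exits 0 if so and non-zero if the certificate would expire within that window. Illustrative example (path taken from the apiserver.crt transfer earlier in the log, not a command the test ran):
	  openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 && echo "apiserver.crt valid for at least another 24h"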
	I0719 04:35:10.431278  151865 kubeadm.go:392] StartCluster: {Name:ha-925161 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-925161 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.75 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:35:10.431443  151865 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 04:35:10.431490  151865 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 04:35:10.469617  151865 cri.go:89] found id: "09474a983e27b34e3673dcf223551088ab64428984deeb2a6ca8b535efe763f7"
	I0719 04:35:10.469641  151865 cri.go:89] found id: "8e2186685ce5385380419621a7d62e66847c580c15f2eb81e3568193d1d88a14"
	I0719 04:35:10.469644  151865 cri.go:89] found id: "9e5dff8dcfc51d728c29b5a44595ae338a5d83270e42b3d1c79d03ce684ae57f"
	I0719 04:35:10.469647  151865 cri.go:89] found id: "045e2b3cfc66b6262fa44a5bd06e4d8e1f9812326318a276daa8b6d80eae81cc"
	I0719 04:35:10.469651  151865 cri.go:89] found id: "f8fbd19dd4d996dc34ade93b87b023c88d559af11bbea8aaf9f9b3a2e6f05672"
	I0719 04:35:10.469655  151865 cri.go:89] found id: "14f21e70e6b65805b44f3ff4e90dd773f402ad0eb25822b81eda2c2816bc7691"
	I0719 04:35:10.469659  151865 cri.go:89] found id: "1109d10f2b3d47b16b6903644a11f08183882b4c704e06aa23088c57b2fa5036"
	I0719 04:35:10.469663  151865 cri.go:89] found id: "6c9e12889a1662da7b7e18d67865a94825c080659d17bd601943c475e944e3b6"
	I0719 04:35:10.469667  151865 cri.go:89] found id: "ae55b7f5bd7bf842ca50cf5c5b471045260fe96b7a4a5ff03cf587c15f692412"
	I0719 04:35:10.469675  151865 cri.go:89] found id: "eeef22350ca0f1246fb4f32ba2c27abda6e12fc362e76159e9806d53d4296a23"
	I0719 04:35:10.469691  151865 cri.go:89] found id: "b041f48cc90cfab5ce219992de0d8ffb58d778e852683bd80044d359a0b7d010"
	I0719 04:35:10.469698  151865 cri.go:89] found id: "6794bae567b7e2c964bdfab18ab28a02cd5bad8823d55bae131a60e8dbefd012"
	I0719 04:35:10.469702  151865 cri.go:89] found id: "882ed073edd75b4a9831d3ded02cad425e74f0eab0bb34819f37757829560513"
	I0719 04:35:10.469707  151865 cri.go:89] found id: ""
	I0719 04:35:10.469758  151865 ssh_runner.go:195] Run: sudo runc list -f json
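	Each ID reported above is a CRI container ID in the kube-system namespace and can be inspected directly with crictl. An illustrative follow-up (not executed by the test), using the first ID found:
	  sudo crictl inspect 09474a983e27b34e3673dcf223551088ab64428984deeb2a6ca8b535efe763f7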
	
	
	==> CRI-O <==
	Jul 19 04:37:37 ha-925161 crio[3828]: time="2024-07-19 04:37:37.663298828Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721363857663267640,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144984,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=27e439d6-fc7e-4252-9de9-85a40f8ca1c9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 04:37:37 ha-925161 crio[3828]: time="2024-07-19 04:37:37.663856568Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=344b1cd3-271a-4cf6-b9f4-22960c21fd1e name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:37:37 ha-925161 crio[3828]: time="2024-07-19 04:37:37.663920914Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=344b1cd3-271a-4cf6-b9f4-22960c21fd1e name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:37:37 ha-925161 crio[3828]: time="2024-07-19 04:37:37.664512754Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de38d7f8ad913451255e6229dc934869431b48c5f872bcecb0f3e1a403da4cb4,PodSandboxId:487e1cddacb84e748c6dfde9fd66ec573f5c3b4c3bc99fa2e21511e78ce3652b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721363775087647557,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf27da3d-f736-4742-9af5-2c0a024075ec,},Annotations:map[string]string{io.kubernetes.container.hash: f97e67c5,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f26b178fe8fc44f1c17d3c9396d1db5bf694da9604aee967ff718d1294de0e4d,PodSandboxId:63f316747da1dc5053319feffa25889a8f469ebabe3fca1227fa8a4a377b6dd0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721363758094491359,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 349099d3ab7836a83b145a30eb9936d6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e01d6998bdb35fcf68bf94a93f0f52290926f382244cf5c91f43ccb8653b233c,PodSandboxId:1fa6c472995417d2338ecc16a10635df2b9e4c896f1c46924054ff1a34148ab3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721363757089338504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c423aaede6d00f00e13551d35c79c4b,},Annotations:map[string]string{io.kubernetes.container.hash: 9eff95bf,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f26b00712a6d5c4d0880df5fe980d974c4610752b924c5d0dfb834e87567fca9,PodSandboxId:6ca18b08ad5cff45f7e0e989e6f170ffc8941bedaf873f70a71407c84aa34f2a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721363750196671162,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xjdg9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e5d1049-6c89-429b-96a8-cbb8abd2b26f,},Annotations:map[string]string{io.kubernetes.container.hash: 3ac1d6ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fee45f18b02073c7552269415ff4c082be8f7549456304a60fa420eaf656d817,PodSandboxId:1a9981cea564c7986a1621609a2660923a7d1c12bf1212ce32e5c9e49a7b682d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721363733157531296,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0da61aa9c7d9fb5aa54fb9d86519c66d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:458930eb4d22263ff4b3c2565edc5f57985aadb6c9bccfa7be738ef94f1f5a3d,PodSandboxId:487e1cddacb84e748c6dfde9fd66ec573f5c3b4c3bc99fa2e21511e78ce3652b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721363717079792588,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf27da3d-f736-4742-9af5-2c0a024075ec,},Annotations:map[string]string{io.kubernetes.container.hash: f97e67c5,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66526fd5cc961dcad93f9334ace4639ff28d46e16c57e6a7665a73c0106842bc,PodSandboxId:9a7e15608cb13a54b49490ee57950e0bf26fe26abc77f21ac40699335603a3cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721363717094993295,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8dbqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd11aac3-62df-4603-8102-3384bcc100f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3deffe05,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:76385fe3aa9b3b1e67ba577f6669ee7c0a1a6cd4a3652f4043910a7d5e44af35,PodSandboxId:e3f389f30197bdfddffd259c3e20564e84b4c8d360a171e7fc586e409583883a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721363716928353935,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fsr5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 988e1118-927a-4468-ba25-3a78d8d06919,},Annotations:map[string]string{io.kubernetes.container.hash: 7fbb852b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1961d5e5
da1a0a8c475b7c70eaf56087054dbc2a459fc00fd013b69ecb5d5b31,PodSandboxId:b0fca0835d179f0ac31e1ae710482ca32ee75304f4f77e608e8d6c1b15002676,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721363716874890524,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hwdsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894f9528-78da-4cae-9ec6-8e82a3e73264,},Annotations:map[string]string{io.kubernetes.container.hash: 7fba949d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4093158c49f1e636104b6473da67cd759726ffb37667deb0fdb30953bfff3ce0,PodSandboxId:a066744bcf4f49e5350c8b2feb87f41aa9fca5658ad8ba7b17fbff019ff6fe06,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721363716985640108,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wzcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a434f69a-903d-4961-a54c-9a85cbc694b1,},Annotations:map[string]string{io.kubernetes.container.hash: 40e24975,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38d19d3723a70de48633fa0df9c13a3b49ac927058ddbec544d2e1756ed2128b,PodSandboxId:1c0dbac79d5413baceab0f90d5d10b4817530c9b1715b96109ef52acda220867,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721363716813072324,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
36cca920f3f48d0fa2da37f2a22f12ba,},Annotations:map[string]string{io.kubernetes.container.hash: 64aa1422,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fd2dbf80d04b99672d475d13595fc0af6b058ef669561db78a22ceb235839f8,PodSandboxId:3a44777f5a58f71965c80cd1daef31f89b8d60917507f14840d3a8030aa103fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721363716804356643,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa73bd154bae08cde433b
82e51ec78df,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2acd3b5d4b137fa864e9c0ca3e381b555ea7b28350ff23870b7291cd3f9ac68d,PodSandboxId:63f316747da1dc5053319feffa25889a8f469ebabe3fca1227fa8a4a377b6dd0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721363716792205557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 349099d3ab7836a8
3b145a30eb9936d6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31e39f44635dba72799e46f73a50c13c6ba21bf7a3b7dc6391fc917ef06f12f3,PodSandboxId:1fa6c472995417d2338ecc16a10635df2b9e4c896f1c46924054ff1a34148ab3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721363716643436478,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c423aaede6d00f00e13551d35c79c4b,},Ann
otations:map[string]string{io.kubernetes.container.hash: 9eff95bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:376dac90130c20ad5ee1fd7cda6913750ce2847ab6b24b8a5ade8f85a7933736,PodSandboxId:0d44fb43a7c0f7260d182a488fbebec1e6a62c08f3bbfbe0601b399af7548cc1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721363166324688671,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xjdg9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e5d1049-6c89-429b-96a8-cbb8abd2b26f,},Annot
ations:map[string]string{io.kubernetes.container.hash: 3ac1d6ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8fbd19dd4d996dc34ade93b87b023c88d559af11bbea8aaf9f9b3a2e6f05672,PodSandboxId:0bb04d64362d6e31033c86d2709d8dea8839e5561f013e6cb8daeb9084a3c238,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721363015206021542,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hwdsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894f9528-78da-4cae-9ec6-8e82a3e73264,},Annotations:map[string]string{io.kube
rnetes.container.hash: 7fba949d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f21e70e6b65805b44f3ff4e90dd773f402ad0eb25822b81eda2c2816bc7691,PodSandboxId:62bcd5e2d22cb8954eda67d5b70c31e06ef2499d3b74d790b9661ca694f80657,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721363015144713551,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wzcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a434f69a-903d-4961-a54c-9a85cbc694b1,},Annotations:map[string]string{io.kubernetes.container.hash: 40e24975,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1109d10f2b3d47b16b6903644a11f08183882b4c704e06aa23088c57b2fa5036,PodSandboxId:b3c277ef1f53ba98a24302115099ce4ef05d3e256d942b1f4d3157995b54ecae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721363003130730891,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fsr5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 988e1118-927a-4468-ba25-3a78d8d06919,},Annotations:map[string]string{io.kubernetes.container.hash: 7fbb852b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c9e12889a1662da7b7e18d67865a94825c080659d17bd601943c475e944e3b6,PodSandboxId:696364d98fd5c2d4bd655bdb3ca5e141f8f51b0ca3c66da051ac248dc390a4d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa59
2b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721363002828851546,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8dbqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd11aac3-62df-4603-8102-3384bcc100f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3deffe05,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeef22350ca0f1246fb4f32ba2c27abda6e12fc362e76159e9806d53d4296a23,PodSandboxId:fa3836c68c71d461c5bb30a2c7d5752ee698b31422b3aa8d734b871de431a0e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0ca
e4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721362982969032426,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa73bd154bae08cde433b82e51ec78df,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b041f48cc90cfab5ce219992de0d8ffb58d778e852683bd80044d359a0b7d010,PodSandboxId:a03be60cf1fe9442e05ead4b2fd503182a4cb9cd50acaaedfd7ef1a4024aab8c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1721362982931161392,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cca920f3f48d0fa2da37f2a22f12ba,},Annotations:map[string]string{io.kubernetes.container.hash: 64aa1422,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=344b1cd3-271a-4cf6-b9f4-22960c21fd1e name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:37:37 ha-925161 crio[3828]: time="2024-07-19 04:37:37.705851235Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=42a093db-5cd2-4ee6-9279-5e2f6999d4cf name=/runtime.v1.RuntimeService/Version
	Jul 19 04:37:37 ha-925161 crio[3828]: time="2024-07-19 04:37:37.705985218Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=42a093db-5cd2-4ee6-9279-5e2f6999d4cf name=/runtime.v1.RuntimeService/Version
	Jul 19 04:37:37 ha-925161 crio[3828]: time="2024-07-19 04:37:37.707471797Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5043f465-d6eb-4439-a5b3-5861f19684fc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 04:37:37 ha-925161 crio[3828]: time="2024-07-19 04:37:37.708248838Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721363857708214950,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144984,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5043f465-d6eb-4439-a5b3-5861f19684fc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 04:37:37 ha-925161 crio[3828]: time="2024-07-19 04:37:37.708844473Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ac49ea27-5693-4c5f-92ce-4537086cd718 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:37:37 ha-925161 crio[3828]: time="2024-07-19 04:37:37.708918978Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ac49ea27-5693-4c5f-92ce-4537086cd718 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:37:37 ha-925161 crio[3828]: time="2024-07-19 04:37:37.712719047Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de38d7f8ad913451255e6229dc934869431b48c5f872bcecb0f3e1a403da4cb4,PodSandboxId:487e1cddacb84e748c6dfde9fd66ec573f5c3b4c3bc99fa2e21511e78ce3652b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721363775087647557,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf27da3d-f736-4742-9af5-2c0a024075ec,},Annotations:map[string]string{io.kubernetes.container.hash: f97e67c5,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f26b178fe8fc44f1c17d3c9396d1db5bf694da9604aee967ff718d1294de0e4d,PodSandboxId:63f316747da1dc5053319feffa25889a8f469ebabe3fca1227fa8a4a377b6dd0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721363758094491359,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 349099d3ab7836a83b145a30eb9936d6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e01d6998bdb35fcf68bf94a93f0f52290926f382244cf5c91f43ccb8653b233c,PodSandboxId:1fa6c472995417d2338ecc16a10635df2b9e4c896f1c46924054ff1a34148ab3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721363757089338504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c423aaede6d00f00e13551d35c79c4b,},Annotations:map[string]string{io.kubernetes.container.hash: 9eff95bf,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f26b00712a6d5c4d0880df5fe980d974c4610752b924c5d0dfb834e87567fca9,PodSandboxId:6ca18b08ad5cff45f7e0e989e6f170ffc8941bedaf873f70a71407c84aa34f2a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721363750196671162,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xjdg9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e5d1049-6c89-429b-96a8-cbb8abd2b26f,},Annotations:map[string]string{io.kubernetes.container.hash: 3ac1d6ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fee45f18b02073c7552269415ff4c082be8f7549456304a60fa420eaf656d817,PodSandboxId:1a9981cea564c7986a1621609a2660923a7d1c12bf1212ce32e5c9e49a7b682d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721363733157531296,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0da61aa9c7d9fb5aa54fb9d86519c66d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:458930eb4d22263ff4b3c2565edc5f57985aadb6c9bccfa7be738ef94f1f5a3d,PodSandboxId:487e1cddacb84e748c6dfde9fd66ec573f5c3b4c3bc99fa2e21511e78ce3652b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721363717079792588,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf27da3d-f736-4742-9af5-2c0a024075ec,},Annotations:map[string]string{io.kubernetes.container.hash: f97e67c5,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66526fd5cc961dcad93f9334ace4639ff28d46e16c57e6a7665a73c0106842bc,PodSandboxId:9a7e15608cb13a54b49490ee57950e0bf26fe26abc77f21ac40699335603a3cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721363717094993295,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8dbqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd11aac3-62df-4603-8102-3384bcc100f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3deffe05,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:76385fe3aa9b3b1e67ba577f6669ee7c0a1a6cd4a3652f4043910a7d5e44af35,PodSandboxId:e3f389f30197bdfddffd259c3e20564e84b4c8d360a171e7fc586e409583883a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721363716928353935,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fsr5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 988e1118-927a-4468-ba25-3a78d8d06919,},Annotations:map[string]string{io.kubernetes.container.hash: 7fbb852b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1961d5e5
da1a0a8c475b7c70eaf56087054dbc2a459fc00fd013b69ecb5d5b31,PodSandboxId:b0fca0835d179f0ac31e1ae710482ca32ee75304f4f77e608e8d6c1b15002676,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721363716874890524,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hwdsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894f9528-78da-4cae-9ec6-8e82a3e73264,},Annotations:map[string]string{io.kubernetes.container.hash: 7fba949d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4093158c49f1e636104b6473da67cd759726ffb37667deb0fdb30953bfff3ce0,PodSandboxId:a066744bcf4f49e5350c8b2feb87f41aa9fca5658ad8ba7b17fbff019ff6fe06,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721363716985640108,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wzcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a434f69a-903d-4961-a54c-9a85cbc694b1,},Annotations:map[string]string{io.kubernetes.container.hash: 40e24975,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38d19d3723a70de48633fa0df9c13a3b49ac927058ddbec544d2e1756ed2128b,PodSandboxId:1c0dbac79d5413baceab0f90d5d10b4817530c9b1715b96109ef52acda220867,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721363716813072324,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
36cca920f3f48d0fa2da37f2a22f12ba,},Annotations:map[string]string{io.kubernetes.container.hash: 64aa1422,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fd2dbf80d04b99672d475d13595fc0af6b058ef669561db78a22ceb235839f8,PodSandboxId:3a44777f5a58f71965c80cd1daef31f89b8d60917507f14840d3a8030aa103fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721363716804356643,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa73bd154bae08cde433b
82e51ec78df,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2acd3b5d4b137fa864e9c0ca3e381b555ea7b28350ff23870b7291cd3f9ac68d,PodSandboxId:63f316747da1dc5053319feffa25889a8f469ebabe3fca1227fa8a4a377b6dd0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721363716792205557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 349099d3ab7836a8
3b145a30eb9936d6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31e39f44635dba72799e46f73a50c13c6ba21bf7a3b7dc6391fc917ef06f12f3,PodSandboxId:1fa6c472995417d2338ecc16a10635df2b9e4c896f1c46924054ff1a34148ab3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721363716643436478,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c423aaede6d00f00e13551d35c79c4b,},Ann
otations:map[string]string{io.kubernetes.container.hash: 9eff95bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:376dac90130c20ad5ee1fd7cda6913750ce2847ab6b24b8a5ade8f85a7933736,PodSandboxId:0d44fb43a7c0f7260d182a488fbebec1e6a62c08f3bbfbe0601b399af7548cc1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721363166324688671,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xjdg9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e5d1049-6c89-429b-96a8-cbb8abd2b26f,},Annot
ations:map[string]string{io.kubernetes.container.hash: 3ac1d6ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8fbd19dd4d996dc34ade93b87b023c88d559af11bbea8aaf9f9b3a2e6f05672,PodSandboxId:0bb04d64362d6e31033c86d2709d8dea8839e5561f013e6cb8daeb9084a3c238,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721363015206021542,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hwdsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894f9528-78da-4cae-9ec6-8e82a3e73264,},Annotations:map[string]string{io.kube
rnetes.container.hash: 7fba949d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f21e70e6b65805b44f3ff4e90dd773f402ad0eb25822b81eda2c2816bc7691,PodSandboxId:62bcd5e2d22cb8954eda67d5b70c31e06ef2499d3b74d790b9661ca694f80657,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721363015144713551,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wzcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a434f69a-903d-4961-a54c-9a85cbc694b1,},Annotations:map[string]string{io.kubernetes.container.hash: 40e24975,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1109d10f2b3d47b16b6903644a11f08183882b4c704e06aa23088c57b2fa5036,PodSandboxId:b3c277ef1f53ba98a24302115099ce4ef05d3e256d942b1f4d3157995b54ecae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721363003130730891,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fsr5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 988e1118-927a-4468-ba25-3a78d8d06919,},Annotations:map[string]string{io.kubernetes.container.hash: 7fbb852b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c9e12889a1662da7b7e18d67865a94825c080659d17bd601943c475e944e3b6,PodSandboxId:696364d98fd5c2d4bd655bdb3ca5e141f8f51b0ca3c66da051ac248dc390a4d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa59
2b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721363002828851546,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8dbqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd11aac3-62df-4603-8102-3384bcc100f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3deffe05,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeef22350ca0f1246fb4f32ba2c27abda6e12fc362e76159e9806d53d4296a23,PodSandboxId:fa3836c68c71d461c5bb30a2c7d5752ee698b31422b3aa8d734b871de431a0e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0ca
e4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721362982969032426,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa73bd154bae08cde433b82e51ec78df,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b041f48cc90cfab5ce219992de0d8ffb58d778e852683bd80044d359a0b7d010,PodSandboxId:a03be60cf1fe9442e05ead4b2fd503182a4cb9cd50acaaedfd7ef1a4024aab8c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1721362982931161392,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cca920f3f48d0fa2da37f2a22f12ba,},Annotations:map[string]string{io.kubernetes.container.hash: 64aa1422,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ac49ea27-5693-4c5f-92ce-4537086cd718 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:37:37 ha-925161 crio[3828]: time="2024-07-19 04:37:37.752998049Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c94c03c4-2102-4b32-a980-cc1d7b050520 name=/runtime.v1.RuntimeService/Version
	Jul 19 04:37:37 ha-925161 crio[3828]: time="2024-07-19 04:37:37.753076994Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c94c03c4-2102-4b32-a980-cc1d7b050520 name=/runtime.v1.RuntimeService/Version
	Jul 19 04:37:37 ha-925161 crio[3828]: time="2024-07-19 04:37:37.754398861Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ee557db4-22b6-4d0b-9cc2-841713441bb3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 04:37:37 ha-925161 crio[3828]: time="2024-07-19 04:37:37.755080235Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721363857755053891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144984,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ee557db4-22b6-4d0b-9cc2-841713441bb3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 04:37:37 ha-925161 crio[3828]: time="2024-07-19 04:37:37.755461117Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0408492d-ea4b-46fe-abc6-83b86f2f1986 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:37:37 ha-925161 crio[3828]: time="2024-07-19 04:37:37.755520642Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0408492d-ea4b-46fe-abc6-83b86f2f1986 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:37:37 ha-925161 crio[3828]: time="2024-07-19 04:37:37.756249510Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de38d7f8ad913451255e6229dc934869431b48c5f872bcecb0f3e1a403da4cb4,PodSandboxId:487e1cddacb84e748c6dfde9fd66ec573f5c3b4c3bc99fa2e21511e78ce3652b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721363775087647557,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf27da3d-f736-4742-9af5-2c0a024075ec,},Annotations:map[string]string{io.kubernetes.container.hash: f97e67c5,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f26b178fe8fc44f1c17d3c9396d1db5bf694da9604aee967ff718d1294de0e4d,PodSandboxId:63f316747da1dc5053319feffa25889a8f469ebabe3fca1227fa8a4a377b6dd0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721363758094491359,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 349099d3ab7836a83b145a30eb9936d6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e01d6998bdb35fcf68bf94a93f0f52290926f382244cf5c91f43ccb8653b233c,PodSandboxId:1fa6c472995417d2338ecc16a10635df2b9e4c896f1c46924054ff1a34148ab3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721363757089338504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c423aaede6d00f00e13551d35c79c4b,},Annotations:map[string]string{io.kubernetes.container.hash: 9eff95bf,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f26b00712a6d5c4d0880df5fe980d974c4610752b924c5d0dfb834e87567fca9,PodSandboxId:6ca18b08ad5cff45f7e0e989e6f170ffc8941bedaf873f70a71407c84aa34f2a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721363750196671162,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xjdg9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e5d1049-6c89-429b-96a8-cbb8abd2b26f,},Annotations:map[string]string{io.kubernetes.container.hash: 3ac1d6ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fee45f18b02073c7552269415ff4c082be8f7549456304a60fa420eaf656d817,PodSandboxId:1a9981cea564c7986a1621609a2660923a7d1c12bf1212ce32e5c9e49a7b682d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721363733157531296,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0da61aa9c7d9fb5aa54fb9d86519c66d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:458930eb4d22263ff4b3c2565edc5f57985aadb6c9bccfa7be738ef94f1f5a3d,PodSandboxId:487e1cddacb84e748c6dfde9fd66ec573f5c3b4c3bc99fa2e21511e78ce3652b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721363717079792588,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf27da3d-f736-4742-9af5-2c0a024075ec,},Annotations:map[string]string{io.kubernetes.container.hash: f97e67c5,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66526fd5cc961dcad93f9334ace4639ff28d46e16c57e6a7665a73c0106842bc,PodSandboxId:9a7e15608cb13a54b49490ee57950e0bf26fe26abc77f21ac40699335603a3cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721363717094993295,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8dbqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd11aac3-62df-4603-8102-3384bcc100f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3deffe05,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:76385fe3aa9b3b1e67ba577f6669ee7c0a1a6cd4a3652f4043910a7d5e44af35,PodSandboxId:e3f389f30197bdfddffd259c3e20564e84b4c8d360a171e7fc586e409583883a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721363716928353935,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fsr5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 988e1118-927a-4468-ba25-3a78d8d06919,},Annotations:map[string]string{io.kubernetes.container.hash: 7fbb852b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1961d5e5
da1a0a8c475b7c70eaf56087054dbc2a459fc00fd013b69ecb5d5b31,PodSandboxId:b0fca0835d179f0ac31e1ae710482ca32ee75304f4f77e608e8d6c1b15002676,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721363716874890524,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hwdsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894f9528-78da-4cae-9ec6-8e82a3e73264,},Annotations:map[string]string{io.kubernetes.container.hash: 7fba949d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4093158c49f1e636104b6473da67cd759726ffb37667deb0fdb30953bfff3ce0,PodSandboxId:a066744bcf4f49e5350c8b2feb87f41aa9fca5658ad8ba7b17fbff019ff6fe06,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721363716985640108,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wzcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a434f69a-903d-4961-a54c-9a85cbc694b1,},Annotations:map[string]string{io.kubernetes.container.hash: 40e24975,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38d19d3723a70de48633fa0df9c13a3b49ac927058ddbec544d2e1756ed2128b,PodSandboxId:1c0dbac79d5413baceab0f90d5d10b4817530c9b1715b96109ef52acda220867,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721363716813072324,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
36cca920f3f48d0fa2da37f2a22f12ba,},Annotations:map[string]string{io.kubernetes.container.hash: 64aa1422,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fd2dbf80d04b99672d475d13595fc0af6b058ef669561db78a22ceb235839f8,PodSandboxId:3a44777f5a58f71965c80cd1daef31f89b8d60917507f14840d3a8030aa103fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721363716804356643,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa73bd154bae08cde433b
82e51ec78df,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2acd3b5d4b137fa864e9c0ca3e381b555ea7b28350ff23870b7291cd3f9ac68d,PodSandboxId:63f316747da1dc5053319feffa25889a8f469ebabe3fca1227fa8a4a377b6dd0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721363716792205557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 349099d3ab7836a8
3b145a30eb9936d6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31e39f44635dba72799e46f73a50c13c6ba21bf7a3b7dc6391fc917ef06f12f3,PodSandboxId:1fa6c472995417d2338ecc16a10635df2b9e4c896f1c46924054ff1a34148ab3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721363716643436478,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c423aaede6d00f00e13551d35c79c4b,},Ann
otations:map[string]string{io.kubernetes.container.hash: 9eff95bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:376dac90130c20ad5ee1fd7cda6913750ce2847ab6b24b8a5ade8f85a7933736,PodSandboxId:0d44fb43a7c0f7260d182a488fbebec1e6a62c08f3bbfbe0601b399af7548cc1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721363166324688671,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xjdg9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e5d1049-6c89-429b-96a8-cbb8abd2b26f,},Annot
ations:map[string]string{io.kubernetes.container.hash: 3ac1d6ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8fbd19dd4d996dc34ade93b87b023c88d559af11bbea8aaf9f9b3a2e6f05672,PodSandboxId:0bb04d64362d6e31033c86d2709d8dea8839e5561f013e6cb8daeb9084a3c238,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721363015206021542,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hwdsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894f9528-78da-4cae-9ec6-8e82a3e73264,},Annotations:map[string]string{io.kube
rnetes.container.hash: 7fba949d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f21e70e6b65805b44f3ff4e90dd773f402ad0eb25822b81eda2c2816bc7691,PodSandboxId:62bcd5e2d22cb8954eda67d5b70c31e06ef2499d3b74d790b9661ca694f80657,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721363015144713551,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wzcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a434f69a-903d-4961-a54c-9a85cbc694b1,},Annotations:map[string]string{io.kubernetes.container.hash: 40e24975,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1109d10f2b3d47b16b6903644a11f08183882b4c704e06aa23088c57b2fa5036,PodSandboxId:b3c277ef1f53ba98a24302115099ce4ef05d3e256d942b1f4d3157995b54ecae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721363003130730891,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fsr5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 988e1118-927a-4468-ba25-3a78d8d06919,},Annotations:map[string]string{io.kubernetes.container.hash: 7fbb852b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c9e12889a1662da7b7e18d67865a94825c080659d17bd601943c475e944e3b6,PodSandboxId:696364d98fd5c2d4bd655bdb3ca5e141f8f51b0ca3c66da051ac248dc390a4d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa59
2b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721363002828851546,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8dbqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd11aac3-62df-4603-8102-3384bcc100f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3deffe05,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeef22350ca0f1246fb4f32ba2c27abda6e12fc362e76159e9806d53d4296a23,PodSandboxId:fa3836c68c71d461c5bb30a2c7d5752ee698b31422b3aa8d734b871de431a0e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0ca
e4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721362982969032426,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa73bd154bae08cde433b82e51ec78df,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b041f48cc90cfab5ce219992de0d8ffb58d778e852683bd80044d359a0b7d010,PodSandboxId:a03be60cf1fe9442e05ead4b2fd503182a4cb9cd50acaaedfd7ef1a4024aab8c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1721362982931161392,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cca920f3f48d0fa2da37f2a22f12ba,},Annotations:map[string]string{io.kubernetes.container.hash: 64aa1422,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0408492d-ea4b-46fe-abc6-83b86f2f1986 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:37:37 ha-925161 crio[3828]: time="2024-07-19 04:37:37.800443623Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1d2ff767-b618-4e52-b7e8-ef9d20a6bbc3 name=/runtime.v1.RuntimeService/Version
	Jul 19 04:37:37 ha-925161 crio[3828]: time="2024-07-19 04:37:37.800516187Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1d2ff767-b618-4e52-b7e8-ef9d20a6bbc3 name=/runtime.v1.RuntimeService/Version
	Jul 19 04:37:37 ha-925161 crio[3828]: time="2024-07-19 04:37:37.801701202Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9550e378-402f-4afe-a981-d3dc73cbcba9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 04:37:37 ha-925161 crio[3828]: time="2024-07-19 04:37:37.802370224Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721363857802344128,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144984,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9550e378-402f-4afe-a981-d3dc73cbcba9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 04:37:37 ha-925161 crio[3828]: time="2024-07-19 04:37:37.802844069Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=36536467-6ff5-4a4f-8cfd-f62b6c46d2c2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:37:37 ha-925161 crio[3828]: time="2024-07-19 04:37:37.802894456Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=36536467-6ff5-4a4f-8cfd-f62b6c46d2c2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:37:37 ha-925161 crio[3828]: time="2024-07-19 04:37:37.803357574Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de38d7f8ad913451255e6229dc934869431b48c5f872bcecb0f3e1a403da4cb4,PodSandboxId:487e1cddacb84e748c6dfde9fd66ec573f5c3b4c3bc99fa2e21511e78ce3652b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721363775087647557,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf27da3d-f736-4742-9af5-2c0a024075ec,},Annotations:map[string]string{io.kubernetes.container.hash: f97e67c5,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f26b178fe8fc44f1c17d3c9396d1db5bf694da9604aee967ff718d1294de0e4d,PodSandboxId:63f316747da1dc5053319feffa25889a8f469ebabe3fca1227fa8a4a377b6dd0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721363758094491359,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 349099d3ab7836a83b145a30eb9936d6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e01d6998bdb35fcf68bf94a93f0f52290926f382244cf5c91f43ccb8653b233c,PodSandboxId:1fa6c472995417d2338ecc16a10635df2b9e4c896f1c46924054ff1a34148ab3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721363757089338504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c423aaede6d00f00e13551d35c79c4b,},Annotations:map[string]string{io.kubernetes.container.hash: 9eff95bf,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f26b00712a6d5c4d0880df5fe980d974c4610752b924c5d0dfb834e87567fca9,PodSandboxId:6ca18b08ad5cff45f7e0e989e6f170ffc8941bedaf873f70a71407c84aa34f2a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721363750196671162,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xjdg9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e5d1049-6c89-429b-96a8-cbb8abd2b26f,},Annotations:map[string]string{io.kubernetes.container.hash: 3ac1d6ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fee45f18b02073c7552269415ff4c082be8f7549456304a60fa420eaf656d817,PodSandboxId:1a9981cea564c7986a1621609a2660923a7d1c12bf1212ce32e5c9e49a7b682d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721363733157531296,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0da61aa9c7d9fb5aa54fb9d86519c66d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:458930eb4d22263ff4b3c2565edc5f57985aadb6c9bccfa7be738ef94f1f5a3d,PodSandboxId:487e1cddacb84e748c6dfde9fd66ec573f5c3b4c3bc99fa2e21511e78ce3652b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721363717079792588,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf27da3d-f736-4742-9af5-2c0a024075ec,},Annotations:map[string]string{io.kubernetes.container.hash: f97e67c5,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66526fd5cc961dcad93f9334ace4639ff28d46e16c57e6a7665a73c0106842bc,PodSandboxId:9a7e15608cb13a54b49490ee57950e0bf26fe26abc77f21ac40699335603a3cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721363717094993295,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8dbqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd11aac3-62df-4603-8102-3384bcc100f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3deffe05,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:76385fe3aa9b3b1e67ba577f6669ee7c0a1a6cd4a3652f4043910a7d5e44af35,PodSandboxId:e3f389f30197bdfddffd259c3e20564e84b4c8d360a171e7fc586e409583883a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721363716928353935,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fsr5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 988e1118-927a-4468-ba25-3a78d8d06919,},Annotations:map[string]string{io.kubernetes.container.hash: 7fbb852b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1961d5e5
da1a0a8c475b7c70eaf56087054dbc2a459fc00fd013b69ecb5d5b31,PodSandboxId:b0fca0835d179f0ac31e1ae710482ca32ee75304f4f77e608e8d6c1b15002676,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721363716874890524,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hwdsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894f9528-78da-4cae-9ec6-8e82a3e73264,},Annotations:map[string]string{io.kubernetes.container.hash: 7fba949d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4093158c49f1e636104b6473da67cd759726ffb37667deb0fdb30953bfff3ce0,PodSandboxId:a066744bcf4f49e5350c8b2feb87f41aa9fca5658ad8ba7b17fbff019ff6fe06,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721363716985640108,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wzcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a434f69a-903d-4961-a54c-9a85cbc694b1,},Annotations:map[string]string{io.kubernetes.container.hash: 40e24975,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38d19d3723a70de48633fa0df9c13a3b49ac927058ddbec544d2e1756ed2128b,PodSandboxId:1c0dbac79d5413baceab0f90d5d10b4817530c9b1715b96109ef52acda220867,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721363716813072324,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
36cca920f3f48d0fa2da37f2a22f12ba,},Annotations:map[string]string{io.kubernetes.container.hash: 64aa1422,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fd2dbf80d04b99672d475d13595fc0af6b058ef669561db78a22ceb235839f8,PodSandboxId:3a44777f5a58f71965c80cd1daef31f89b8d60917507f14840d3a8030aa103fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721363716804356643,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa73bd154bae08cde433b
82e51ec78df,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2acd3b5d4b137fa864e9c0ca3e381b555ea7b28350ff23870b7291cd3f9ac68d,PodSandboxId:63f316747da1dc5053319feffa25889a8f469ebabe3fca1227fa8a4a377b6dd0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721363716792205557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 349099d3ab7836a8
3b145a30eb9936d6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31e39f44635dba72799e46f73a50c13c6ba21bf7a3b7dc6391fc917ef06f12f3,PodSandboxId:1fa6c472995417d2338ecc16a10635df2b9e4c896f1c46924054ff1a34148ab3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721363716643436478,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c423aaede6d00f00e13551d35c79c4b,},Ann
otations:map[string]string{io.kubernetes.container.hash: 9eff95bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:376dac90130c20ad5ee1fd7cda6913750ce2847ab6b24b8a5ade8f85a7933736,PodSandboxId:0d44fb43a7c0f7260d182a488fbebec1e6a62c08f3bbfbe0601b399af7548cc1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721363166324688671,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xjdg9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e5d1049-6c89-429b-96a8-cbb8abd2b26f,},Annot
ations:map[string]string{io.kubernetes.container.hash: 3ac1d6ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8fbd19dd4d996dc34ade93b87b023c88d559af11bbea8aaf9f9b3a2e6f05672,PodSandboxId:0bb04d64362d6e31033c86d2709d8dea8839e5561f013e6cb8daeb9084a3c238,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721363015206021542,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hwdsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894f9528-78da-4cae-9ec6-8e82a3e73264,},Annotations:map[string]string{io.kube
rnetes.container.hash: 7fba949d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f21e70e6b65805b44f3ff4e90dd773f402ad0eb25822b81eda2c2816bc7691,PodSandboxId:62bcd5e2d22cb8954eda67d5b70c31e06ef2499d3b74d790b9661ca694f80657,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721363015144713551,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wzcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a434f69a-903d-4961-a54c-9a85cbc694b1,},Annotations:map[string]string{io.kubernetes.container.hash: 40e24975,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1109d10f2b3d47b16b6903644a11f08183882b4c704e06aa23088c57b2fa5036,PodSandboxId:b3c277ef1f53ba98a24302115099ce4ef05d3e256d942b1f4d3157995b54ecae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721363003130730891,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fsr5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 988e1118-927a-4468-ba25-3a78d8d06919,},Annotations:map[string]string{io.kubernetes.container.hash: 7fbb852b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c9e12889a1662da7b7e18d67865a94825c080659d17bd601943c475e944e3b6,PodSandboxId:696364d98fd5c2d4bd655bdb3ca5e141f8f51b0ca3c66da051ac248dc390a4d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa59
2b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721363002828851546,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8dbqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd11aac3-62df-4603-8102-3384bcc100f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3deffe05,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeef22350ca0f1246fb4f32ba2c27abda6e12fc362e76159e9806d53d4296a23,PodSandboxId:fa3836c68c71d461c5bb30a2c7d5752ee698b31422b3aa8d734b871de431a0e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0ca
e4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721362982969032426,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa73bd154bae08cde433b82e51ec78df,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b041f48cc90cfab5ce219992de0d8ffb58d778e852683bd80044d359a0b7d010,PodSandboxId:a03be60cf1fe9442e05ead4b2fd503182a4cb9cd50acaaedfd7ef1a4024aab8c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1721362982931161392,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cca920f3f48d0fa2da37f2a22f12ba,},Annotations:map[string]string{io.kubernetes.container.hash: 64aa1422,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=36536467-6ff5-4a4f-8cfd-f62b6c46d2c2 name=/runtime.v1.RuntimeService/ListContainers
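The repeated Version, ImageFsInfo and ListContainers entries above are CRI-O's gRPC interceptor (file="otel-collector/interceptors.go") tracing the kubelet's periodic CRI polling; because the ContainerFilter in each request is empty, the runtime answers every poll with the full container list, which is why the same payload recurs. A minimal Go sketch of issuing that same ListContainers call against the node's CRI-O socket might look as follows; the socket path and the cri-api client wiring here are illustrative assumptions, not part of the test harness:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial CRI-O over its local unix socket (default path; adjust if the node differs).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty filter mirrors the "No filters were applied" requests in the log,
		// so the runtime returns every container, running and exited.
		resp, err := runtimeapi.NewRuntimeServiceClient(conn).
			ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			fmt.Println(c.Id, c.GetMetadata().GetName(), c.State)
		}
	}

The "container status" table that follows presents the same container list in tabular form, comparable to what crictl would print on the node.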
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	de38d7f8ad913       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   487e1cddacb84       storage-provisioner
	f26b178fe8fc4       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   2                   63f316747da1d       kube-controller-manager-ha-925161
	e01d6998bdb35       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            3                   1fa6c47299541       kube-apiserver-ha-925161
	f26b00712a6d5       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   6ca18b08ad5cf       busybox-fc5497c4f-xjdg9
	fee45f18b0207       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   1a9981cea564c       kube-vip-ha-925161
	66526fd5cc961       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      2 minutes ago        Running             kube-proxy                1                   9a7e15608cb13       kube-proxy-8dbqt
	458930eb4d222       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   487e1cddacb84       storage-provisioner
	4093158c49f1e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   a066744bcf4f4       coredns-7db6d8ff4d-7wzcg
	76385fe3aa9b3       5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f                                      2 minutes ago        Running             kindnet-cni               1                   e3f389f30197b       kindnet-fsr5f
	1961d5e5da1a0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   b0fca0835d179       coredns-7db6d8ff4d-hwdsq
	38d19d3723a70       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   1c0dbac79d541       etcd-ha-925161
	3fd2dbf80d04b       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      2 minutes ago        Running             kube-scheduler            1                   3a44777f5a58f       kube-scheduler-ha-925161
	2acd3b5d4b137       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      2 minutes ago        Exited              kube-controller-manager   1                   63f316747da1d       kube-controller-manager-ha-925161
	31e39f44635db       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      2 minutes ago        Exited              kube-apiserver            2                   1fa6c47299541       kube-apiserver-ha-925161
	376dac90130c2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago       Exited              busybox                   0                   0d44fb43a7c0f       busybox-fc5497c4f-xjdg9
	f8fbd19dd4d99       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   0bb04d64362d6       coredns-7db6d8ff4d-hwdsq
	14f21e70e6b65       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   62bcd5e2d22cb       coredns-7db6d8ff4d-7wzcg
	1109d10f2b3d4       5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f                                      14 minutes ago       Exited              kindnet-cni               0                   b3c277ef1f53b       kindnet-fsr5f
	6c9e12889a166       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      14 minutes ago       Exited              kube-proxy                0                   696364d98fd5c       kube-proxy-8dbqt
	eeef22350ca0f       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      14 minutes ago       Exited              kube-scheduler            0                   fa3836c68c71d       kube-scheduler-ha-925161
	b041f48cc90cf       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      14 minutes ago       Exited              etcd                      0                   a03be60cf1fe9       etcd-ha-925161
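	The table above is the CRI-level container state for node ha-925161. A minimal sketch of pulling the same view by hand, assuming the profile is still named ha-925161 and that crictl is on the guest PATH (both are assumptions about this environment):
	
	  # open a shell on the primary control-plane node
	  minikube ssh -p ha-925161
	  # list all CRI-O containers, including exited attempts
	  sudo crictl ps -a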
	
	
	==> coredns [14f21e70e6b65805b44f3ff4e90dd773f402ad0eb25822b81eda2c2816bc7691] <==
	[INFO] 10.244.1.2:41971 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00346851s
	[INFO] 10.244.1.2:57720 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114773s
	[INFO] 10.244.2.3:58305 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001754058s
	[INFO] 10.244.2.3:54206 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000118435s
	[INFO] 10.244.2.3:37056 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000234861s
	[INFO] 10.244.2.3:45425 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073142s
	[INFO] 10.244.0.4:54647 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007602s
	[INFO] 10.244.0.4:33742 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001338144s
	[INFO] 10.244.1.2:58214 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123014s
	[INFO] 10.244.1.2:58591 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083326s
	[INFO] 10.244.1.2:33227 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000196172s
	[INFO] 10.244.2.3:49582 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115766s
	[INFO] 10.244.2.3:46761 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109526s
	[INFO] 10.244.0.4:50248 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066399s
	[INFO] 10.244.1.2:45766 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00012847s
	[INFO] 10.244.1.2:57759 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000145394s
	[INFO] 10.244.2.3:50037 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160043s
	[INFO] 10.244.2.3:49469 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000075305s
	[INFO] 10.244.2.3:39504 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000057986s
	[INFO] 10.244.0.4:39098 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096095s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [1961d5e5da1a0a8c475b7c70eaf56087054dbc2a459fc00fd013b69ecb5d5b31] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:58630->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1233165234]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Jul-2024 04:35:28.338) (total time: 10550ms):
	Trace[1233165234]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:58630->10.96.0.1:443: read: connection reset by peer 10550ms (04:35:38.888)
	Trace[1233165234]: [10.550381464s] [10.550381464s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:58630->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:47410->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:47410->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:58670->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:58670->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [4093158c49f1e636104b6473da67cd759726ffb37667deb0fdb30953bfff3ce0] <==
	Trace[245165649]: [10.001410163s] [10.001410163s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [f8fbd19dd4d996dc34ade93b87b023c88d559af11bbea8aaf9f9b3a2e6f05672] <==
	[INFO] 10.244.2.3:48698 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001253504s
	[INFO] 10.244.2.3:45424 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060715s
	[INFO] 10.244.0.4:53435 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00016485s
	[INFO] 10.244.0.4:47050 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001790838s
	[INFO] 10.244.0.4:38074 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000058109s
	[INFO] 10.244.0.4:53487 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000066861s
	[INFO] 10.244.0.4:48230 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00012907s
	[INFO] 10.244.0.4:45713 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000053151s
	[INFO] 10.244.1.2:40224 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119446s
	[INFO] 10.244.2.3:48643 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000101063s
	[INFO] 10.244.2.3:59393 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008526s
	[INFO] 10.244.0.4:38457 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000103892s
	[INFO] 10.244.0.4:36242 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00015645s
	[INFO] 10.244.0.4:47871 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076477s
	[INFO] 10.244.1.2:44263 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176905s
	[INFO] 10.244.1.2:56297 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000215661s
	[INFO] 10.244.2.3:45341 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000148843s
	[INFO] 10.244.0.4:41990 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000105346s
	[INFO] 10.244.0.4:43204 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000121535s
	[INFO] 10.244.0.4:60972 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000251518s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
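	The CoreDNS errors above all point at the in-cluster apiserver Service (10.96.0.1:443) being unreachable (connection refused, no route to host, TLS handshake timeout) while kube-apiserver was restarting, which matches the exited kube-apiserver attempt in the container table. A minimal sketch for checking the Service and its apiserver endpoints from the host, assuming the kubeconfig context is named after the profile (an assumption):
	
	  # current CoreDNS logs for one of the pods named in the section headers above
	  kubectl --context ha-925161 -n kube-system logs coredns-7db6d8ff4d-7wzcg
	  # the ClusterIP CoreDNS is dialing, and the apiserver endpoints behind it
	  kubectl --context ha-925161 get svc kubernetes -o wide
	  kubectl --context ha-925161 get endpoints kubernetes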
	
	
	==> describe nodes <==
	Name:               ha-925161
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-925161
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=ha-925161
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T04_23_10_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 04:23:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-925161
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 04:37:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 04:36:20 +0000   Fri, 19 Jul 2024 04:23:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 04:36:20 +0000   Fri, 19 Jul 2024 04:23:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 04:36:20 +0000   Fri, 19 Jul 2024 04:23:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 04:36:20 +0000   Fri, 19 Jul 2024 04:23:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.246
	  Hostname:    ha-925161
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ff8c87164fa44c4f827d29ad58165cee
	  System UUID:                ff8c8716-4fa4-4c4f-827d-29ad58165cee
	  Boot ID:                    82d231ce-d7a6-41a1-a656-2e7410a6f84c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xjdg9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-7wzcg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-hwdsq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-925161                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-fsr5f                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-925161             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-925161    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-8dbqt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-925161             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-925161                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   Starting                 97s                    kube-proxy       
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-925161 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-925161 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-925161 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m                    node-controller  Node ha-925161 event: Registered Node ha-925161 in Controller
	  Normal   NodeReady                14m                    kubelet          Node ha-925161 status is now: NodeReady
	  Normal   RegisteredNode           13m                    node-controller  Node ha-925161 event: Registered Node ha-925161 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-925161 event: Registered Node ha-925161 in Controller
	  Warning  ContainerGCFailed        2m29s (x2 over 3m29s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           94s                    node-controller  Node ha-925161 event: Registered Node ha-925161 in Controller
	  Normal   RegisteredNode           87s                    node-controller  Node ha-925161 event: Registered Node ha-925161 in Controller
	  Normal   RegisteredNode           30s                    node-controller  Node ha-925161 event: Registered Node ha-925161 in Controller
	
	
	Name:               ha-925161-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-925161-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=ha-925161
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T04_24_23_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 04:24:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-925161-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 04:37:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 04:36:40 +0000   Fri, 19 Jul 2024 04:35:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 04:36:40 +0000   Fri, 19 Jul 2024 04:35:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 04:36:40 +0000   Fri, 19 Jul 2024 04:35:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 04:36:40 +0000   Fri, 19 Jul 2024 04:35:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-925161-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9158ff8415464fc08c01f2344e6694f7
	  System UUID:                9158ff84-1546-4fc0-8c01-f2344e6694f7
	  Boot ID:                    f097e6d1-5160-4643-ae17-6e026c47bbf2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5785p                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-925161-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-dkctc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-925161-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-925161-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-s6df4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-925161-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-925161-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 76s                  kube-proxy       
	  Normal  Starting                 13m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  13m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)    kubelet          Node ha-925161-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)    kubelet          Node ha-925161-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)    kubelet          Node ha-925161-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                  node-controller  Node ha-925161-m02 event: Registered Node ha-925161-m02 in Controller
	  Normal  RegisteredNode           13m                  node-controller  Node ha-925161-m02 event: Registered Node ha-925161-m02 in Controller
	  Normal  RegisteredNode           11m                  node-controller  Node ha-925161-m02 event: Registered Node ha-925161-m02 in Controller
	  Normal  NodeNotReady             8m52s                node-controller  Node ha-925161-m02 status is now: NodeNotReady
	  Normal  Starting                 2m8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m8s (x8 over 2m8s)  kubelet          Node ha-925161-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m8s (x8 over 2m8s)  kubelet          Node ha-925161-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m8s (x7 over 2m8s)  kubelet          Node ha-925161-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           94s                  node-controller  Node ha-925161-m02 event: Registered Node ha-925161-m02 in Controller
	  Normal  RegisteredNode           87s                  node-controller  Node ha-925161-m02 event: Registered Node ha-925161-m02 in Controller
	  Normal  RegisteredNode           30s                  node-controller  Node ha-925161-m02 event: Registered Node ha-925161-m02 in Controller
	
	
	Name:               ha-925161-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-925161-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=ha-925161
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T04_25_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 04:25:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-925161-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 04:37:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 04:37:11 +0000   Fri, 19 Jul 2024 04:25:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 04:37:11 +0000   Fri, 19 Jul 2024 04:25:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 04:37:11 +0000   Fri, 19 Jul 2024 04:25:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 04:37:11 +0000   Fri, 19 Jul 2024 04:25:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.190
	  Hostname:    ha-925161-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3e81f7ca95c24874b7c002cc8e188173
	  System UUID:                3e81f7ca-95c2-4874-b7c0-02cc8e188173
	  Boot ID:                    06eb074d-49ae-4e09-b060-4903bc3e0686
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-t2m4d                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-925161-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-7gvt6                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-925161-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-925161-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-j6526                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-925161-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-925161-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 39s                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node ha-925161-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node ha-925161-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node ha-925161-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                node-controller  Node ha-925161-m03 event: Registered Node ha-925161-m03 in Controller
	  Normal   RegisteredNode           12m                node-controller  Node ha-925161-m03 event: Registered Node ha-925161-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-925161-m03 event: Registered Node ha-925161-m03 in Controller
	  Normal   RegisteredNode           94s                node-controller  Node ha-925161-m03 event: Registered Node ha-925161-m03 in Controller
	  Normal   RegisteredNode           87s                node-controller  Node ha-925161-m03 event: Registered Node ha-925161-m03 in Controller
	  Normal   Starting                 58s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  57s (x2 over 58s)  kubelet          Node ha-925161-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    57s (x2 over 58s)  kubelet          Node ha-925161-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     57s (x2 over 58s)  kubelet          Node ha-925161-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  57s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 57s                kubelet          Node ha-925161-m03 has been rebooted, boot id: 06eb074d-49ae-4e09-b060-4903bc3e0686
	  Normal   RegisteredNode           30s                node-controller  Node ha-925161-m03 event: Registered Node ha-925161-m03 in Controller
	
	
	Name:               ha-925161-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-925161-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=ha-925161
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T04_27_30_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 04:27:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-925161-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 04:37:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 04:37:29 +0000   Fri, 19 Jul 2024 04:37:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 04:37:29 +0000   Fri, 19 Jul 2024 04:37:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 04:37:29 +0000   Fri, 19 Jul 2024 04:37:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 04:37:29 +0000   Fri, 19 Jul 2024 04:37:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.75
	  Hostname:    ha-925161-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e08274d02fa64707986686183076854f
	  System UUID:                e08274d0-2fa6-4707-9866-86183076854f
	  Boot ID:                    af36be98-8b95-4bf4-abe3-9ae5efece267
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-dnwxp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-f4fgd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x3 over 10m)  kubelet          Node ha-925161-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x3 over 10m)  kubelet          Node ha-925161-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x3 over 10m)  kubelet          Node ha-925161-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-925161-m04 event: Registered Node ha-925161-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-925161-m04 event: Registered Node ha-925161-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-925161-m04 event: Registered Node ha-925161-m04 in Controller
	  Normal   NodeReady                9m49s              kubelet          Node ha-925161-m04 status is now: NodeReady
	  Normal   RegisteredNode           94s                node-controller  Node ha-925161-m04 event: Registered Node ha-925161-m04 in Controller
	  Normal   RegisteredNode           87s                node-controller  Node ha-925161-m04 event: Registered Node ha-925161-m04 in Controller
	  Normal   NodeNotReady             53s                node-controller  Node ha-925161-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           30s                node-controller  Node ha-925161-m04 event: Registered Node ha-925161-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 9s (x2 over 9s)    kubelet          Node ha-925161-m04 has been rebooted, boot id: af36be98-8b95-4bf4-abe3-9ae5efece267
	  Normal   NodeHasSufficientMemory  9s (x3 over 9s)    kubelet          Node ha-925161-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x3 over 9s)    kubelet          Node ha-925161-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x3 over 9s)    kubelet          Node ha-925161-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             9s                 kubelet          Node ha-925161-m04 status is now: NodeNotReady
	  Normal   NodeReady                9s                 kubelet          Node ha-925161-m04 status is now: NodeReady
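	The node descriptions above come from the Kubernetes API. A minimal sketch of regenerating them, again assuming a kubeconfig context named after the ha-925161 profile:
	
	  # a single node
	  kubectl --context ha-925161 describe node ha-925161
	  # all four nodes of this multi-control-plane cluster
	  kubectl --context ha-925161 describe nodes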
	
	
	==> dmesg <==
	[  +8.442247] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.062592] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054468] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.195426] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.118864] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.257746] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +3.980513] systemd-fstab-generator[774]: Ignoring "noauto" option for root device
	[Jul19 04:23] systemd-fstab-generator[956]: Ignoring "noauto" option for root device
	[  +0.065569] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.069928] systemd-fstab-generator[1370]: Ignoring "noauto" option for root device
	[  +0.091097] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.840611] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.224120] kauditd_printk_skb: 38 callbacks suppressed
	[Jul19 04:24] kauditd_printk_skb: 26 callbacks suppressed
	[Jul19 04:32] kauditd_printk_skb: 1 callbacks suppressed
	[Jul19 04:35] systemd-fstab-generator[3742]: Ignoring "noauto" option for root device
	[  +0.159179] systemd-fstab-generator[3754]: Ignoring "noauto" option for root device
	[  +0.181418] systemd-fstab-generator[3769]: Ignoring "noauto" option for root device
	[  +0.156782] systemd-fstab-generator[3781]: Ignoring "noauto" option for root device
	[  +0.275585] systemd-fstab-generator[3810]: Ignoring "noauto" option for root device
	[  +0.922859] systemd-fstab-generator[3926]: Ignoring "noauto" option for root device
	[  +6.529241] kauditd_printk_skb: 127 callbacks suppressed
	[ +16.805791] kauditd_printk_skb: 86 callbacks suppressed
	[  +5.878952] kauditd_printk_skb: 1 callbacks suppressed
	[Jul19 04:36] kauditd_printk_skb: 2 callbacks suppressed
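	The kernel messages above are the guest's ring buffer. A minimal sketch of reading it directly on the node (sudo is used because dmesg may be restricted for unprivileged users, an assumption about this guest image):
	
	  minikube ssh -p ha-925161
	  sudo dmesg | tail -n 40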
	
	
	==> etcd [38d19d3723a70de48633fa0df9c13a3b49ac927058ddbec544d2e1756ed2128b] <==
	{"level":"warn","ts":"2024-07-19T04:36:38.305675Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.190:2380/version","remote-member-id":"29da33e6eb84f18b","error":"Get \"https://192.168.39.190:2380/version\": dial tcp 192.168.39.190:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-19T04:36:38.305809Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"29da33e6eb84f18b","error":"Get \"https://192.168.39.190:2380/version\": dial tcp 192.168.39.190:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-19T04:36:42.308159Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.190:2380/version","remote-member-id":"29da33e6eb84f18b","error":"Get \"https://192.168.39.190:2380/version\": dial tcp 192.168.39.190:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-19T04:36:42.308227Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"29da33e6eb84f18b","error":"Get \"https://192.168.39.190:2380/version\": dial tcp 192.168.39.190:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-19T04:36:42.953228Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"29da33e6eb84f18b","rtt":"0s","error":"dial tcp 192.168.39.190:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-19T04:36:42.953352Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"29da33e6eb84f18b","rtt":"0s","error":"dial tcp 192.168.39.190:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-19T04:36:46.310087Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.190:2380/version","remote-member-id":"29da33e6eb84f18b","error":"Get \"https://192.168.39.190:2380/version\": dial tcp 192.168.39.190:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-19T04:36:46.31016Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"29da33e6eb84f18b","error":"Get \"https://192.168.39.190:2380/version\": dial tcp 192.168.39.190:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-19T04:36:47.954094Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"29da33e6eb84f18b","rtt":"0s","error":"dial tcp 192.168.39.190:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-19T04:36:47.95418Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"29da33e6eb84f18b","rtt":"0s","error":"dial tcp 192.168.39.190:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-19T04:36:50.31175Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.190:2380/version","remote-member-id":"29da33e6eb84f18b","error":"Get \"https://192.168.39.190:2380/version\": dial tcp 192.168.39.190:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-19T04:36:50.31186Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"29da33e6eb84f18b","error":"Get \"https://192.168.39.190:2380/version\": dial tcp 192.168.39.190:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-19T04:36:51.402486Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"29da33e6eb84f18b"}
	{"level":"info","ts":"2024-07-19T04:36:51.402798Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b19954eb16571c64","remote-peer-id":"29da33e6eb84f18b"}
	{"level":"info","ts":"2024-07-19T04:36:51.415519Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b19954eb16571c64","remote-peer-id":"29da33e6eb84f18b"}
	{"level":"info","ts":"2024-07-19T04:36:51.433589Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b19954eb16571c64","to":"29da33e6eb84f18b","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-19T04:36:51.433645Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b19954eb16571c64","remote-peer-id":"29da33e6eb84f18b"}
	{"level":"info","ts":"2024-07-19T04:36:51.434843Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b19954eb16571c64","to":"29da33e6eb84f18b","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-19T04:36:51.434887Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b19954eb16571c64","remote-peer-id":"29da33e6eb84f18b"}
	{"level":"warn","ts":"2024-07-19T04:36:52.9542Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"29da33e6eb84f18b","rtt":"0s","error":"dial tcp 192.168.39.190:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-19T04:36:52.954336Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"29da33e6eb84f18b","rtt":"0s","error":"dial tcp 192.168.39.190:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-19T04:37:33.10659Z","caller":"traceutil/trace.go:171","msg":"trace[352178164] linearizableReadLoop","detail":"{readStateIndex:3106; appliedIndex:3107; }","duration":"132.263544ms","start":"2024-07-19T04:37:32.974286Z","end":"2024-07-19T04:37:33.10655Z","steps":["trace[352178164] 'read index received'  (duration: 132.259346ms)","trace[352178164] 'applied index is now lower than readState.Index'  (duration: 3.288µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T04:37:33.107559Z","caller":"traceutil/trace.go:171","msg":"trace[1732292203] transaction","detail":"{read_only:false; response_revision:2658; number_of_response:1; }","duration":"134.208009ms","start":"2024-07-19T04:37:32.973333Z","end":"2024-07-19T04:37:33.107541Z","steps":["trace[1732292203] 'process raft request'  (duration: 133.834226ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T04:37:33.111237Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.87939ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T04:37:33.111413Z","caller":"traceutil/trace.go:171","msg":"trace[949516260] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2658; }","duration":"137.13256ms","start":"2024-07-19T04:37:32.974261Z","end":"2024-07-19T04:37:33.111394Z","steps":["trace[949516260] 'agreement among raft nodes before linearized reading'  (duration: 133.144142ms)"],"step_count":1}
	
	
	==> etcd [b041f48cc90cfab5ce219992de0d8ffb58d778e852683bd80044d359a0b7d010] <==
	{"level":"info","ts":"2024-07-19T04:33:36.53321Z","caller":"traceutil/trace.go:171","msg":"trace[296995039] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; }","duration":"519.813015ms","start":"2024-07-19T04:33:36.013393Z","end":"2024-07-19T04:33:36.533207Z","steps":["trace[296995039] 'agreement among raft nodes before linearized reading'  (duration: 500.946689ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T04:33:36.533224Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T04:33:36.013389Z","time spent":"519.830447ms","remote":"127.0.0.1:43672","response type":"/etcdserverpb.KV/Range","request count":0,"request size":63,"response count":0,"response size":0,"request content":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" limit:10000 "}
	2024/07/19 04:33:36 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-19T04:33:36.533314Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T04:33:28.998363Z","time spent":"7.534563905s","remote":"127.0.0.1:43502","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":0,"response size":0,"request content":"key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" limit:10000 "}
	2024/07/19 04:33:36 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-19T04:33:36.58319Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.246:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T04:33:36.583399Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.246:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-19T04:33:36.584651Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b19954eb16571c64","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-19T04:33:36.584875Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e91664def0166b0e"}
	{"level":"info","ts":"2024-07-19T04:33:36.584913Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e91664def0166b0e"}
	{"level":"info","ts":"2024-07-19T04:33:36.584971Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e91664def0166b0e"}
	{"level":"info","ts":"2024-07-19T04:33:36.585098Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e"}
	{"level":"info","ts":"2024-07-19T04:33:36.585145Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e"}
	{"level":"info","ts":"2024-07-19T04:33:36.58519Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e"}
	{"level":"info","ts":"2024-07-19T04:33:36.585228Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e91664def0166b0e"}
	{"level":"info","ts":"2024-07-19T04:33:36.585236Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"29da33e6eb84f18b"}
	{"level":"info","ts":"2024-07-19T04:33:36.585245Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"29da33e6eb84f18b"}
	{"level":"info","ts":"2024-07-19T04:33:36.585264Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"29da33e6eb84f18b"}
	{"level":"info","ts":"2024-07-19T04:33:36.585342Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b19954eb16571c64","remote-peer-id":"29da33e6eb84f18b"}
	{"level":"info","ts":"2024-07-19T04:33:36.585457Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b19954eb16571c64","remote-peer-id":"29da33e6eb84f18b"}
	{"level":"info","ts":"2024-07-19T04:33:36.585509Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b19954eb16571c64","remote-peer-id":"29da33e6eb84f18b"}
	{"level":"info","ts":"2024-07-19T04:33:36.585522Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"29da33e6eb84f18b"}
	{"level":"info","ts":"2024-07-19T04:33:36.587816Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.246:2380"}
	{"level":"info","ts":"2024-07-19T04:33:36.588095Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.246:2380"}
	{"level":"info","ts":"2024-07-19T04:33:36.588168Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-925161","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.246:2380"],"advertise-client-urls":["https://192.168.39.246:2379"]}
	
	
	==> kernel <==
	 04:37:38 up 15 min,  0 users,  load average: 0.26, 0.34, 0.22
	Linux ha-925161 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1109d10f2b3d47b16b6903644a11f08183882b4c704e06aa23088c57b2fa5036] <==
	I0719 04:33:14.195172       1 main.go:299] Handling node with IPs: map[192.168.39.102:{}]
	I0719 04:33:14.195289       1 main.go:326] Node ha-925161-m02 has CIDR [10.244.1.0/24] 
	I0719 04:33:14.195455       1 main.go:299] Handling node with IPs: map[192.168.39.190:{}]
	I0719 04:33:14.195511       1 main.go:326] Node ha-925161-m03 has CIDR [10.244.2.0/24] 
	I0719 04:33:14.195578       1 main.go:299] Handling node with IPs: map[192.168.39.75:{}]
	I0719 04:33:14.195597       1 main.go:326] Node ha-925161-m04 has CIDR [10.244.3.0/24] 
	I0719 04:33:14.195664       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0719 04:33:14.195683       1 main.go:303] handling current node
	I0719 04:33:24.194969       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0719 04:33:24.195082       1 main.go:303] handling current node
	I0719 04:33:24.195111       1 main.go:299] Handling node with IPs: map[192.168.39.102:{}]
	I0719 04:33:24.195128       1 main.go:326] Node ha-925161-m02 has CIDR [10.244.1.0/24] 
	I0719 04:33:24.195275       1 main.go:299] Handling node with IPs: map[192.168.39.190:{}]
	I0719 04:33:24.195364       1 main.go:326] Node ha-925161-m03 has CIDR [10.244.2.0/24] 
	I0719 04:33:24.195498       1 main.go:299] Handling node with IPs: map[192.168.39.75:{}]
	I0719 04:33:24.195522       1 main.go:326] Node ha-925161-m04 has CIDR [10.244.3.0/24] 
	I0719 04:33:34.195409       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0719 04:33:34.195671       1 main.go:303] handling current node
	I0719 04:33:34.195716       1 main.go:299] Handling node with IPs: map[192.168.39.102:{}]
	I0719 04:33:34.195738       1 main.go:326] Node ha-925161-m02 has CIDR [10.244.1.0/24] 
	I0719 04:33:34.195996       1 main.go:299] Handling node with IPs: map[192.168.39.190:{}]
	I0719 04:33:34.196056       1 main.go:326] Node ha-925161-m03 has CIDR [10.244.2.0/24] 
	I0719 04:33:34.196146       1 main.go:299] Handling node with IPs: map[192.168.39.75:{}]
	I0719 04:33:34.196180       1 main.go:326] Node ha-925161-m04 has CIDR [10.244.3.0/24] 
	E0719 04:33:35.016562       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""
	
	
	==> kindnet [76385fe3aa9b3b1e67ba577f6669ee7c0a1a6cd4a3652f4043910a7d5e44af35] <==
	I0719 04:37:07.995296       1 main.go:326] Node ha-925161-m03 has CIDR [10.244.2.0/24] 
	I0719 04:37:17.992294       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0719 04:37:17.992451       1 main.go:303] handling current node
	I0719 04:37:17.992488       1 main.go:299] Handling node with IPs: map[192.168.39.102:{}]
	I0719 04:37:17.992512       1 main.go:326] Node ha-925161-m02 has CIDR [10.244.1.0/24] 
	I0719 04:37:17.992644       1 main.go:299] Handling node with IPs: map[192.168.39.190:{}]
	I0719 04:37:17.992685       1 main.go:326] Node ha-925161-m03 has CIDR [10.244.2.0/24] 
	I0719 04:37:17.992771       1 main.go:299] Handling node with IPs: map[192.168.39.75:{}]
	I0719 04:37:17.992795       1 main.go:326] Node ha-925161-m04 has CIDR [10.244.3.0/24] 
	I0719 04:37:27.999428       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0719 04:37:27.999481       1 main.go:303] handling current node
	I0719 04:37:27.999501       1 main.go:299] Handling node with IPs: map[192.168.39.102:{}]
	I0719 04:37:27.999510       1 main.go:326] Node ha-925161-m02 has CIDR [10.244.1.0/24] 
	I0719 04:37:27.999675       1 main.go:299] Handling node with IPs: map[192.168.39.190:{}]
	I0719 04:37:27.999709       1 main.go:326] Node ha-925161-m03 has CIDR [10.244.2.0/24] 
	I0719 04:37:27.999791       1 main.go:299] Handling node with IPs: map[192.168.39.75:{}]
	I0719 04:37:27.999825       1 main.go:326] Node ha-925161-m04 has CIDR [10.244.3.0/24] 
	I0719 04:37:37.997259       1 main.go:299] Handling node with IPs: map[192.168.39.102:{}]
	I0719 04:37:37.997330       1 main.go:326] Node ha-925161-m02 has CIDR [10.244.1.0/24] 
	I0719 04:37:37.997479       1 main.go:299] Handling node with IPs: map[192.168.39.190:{}]
	I0719 04:37:37.997500       1 main.go:326] Node ha-925161-m03 has CIDR [10.244.2.0/24] 
	I0719 04:37:37.997580       1 main.go:299] Handling node with IPs: map[192.168.39.75:{}]
	I0719 04:37:37.997599       1 main.go:326] Node ha-925161-m04 has CIDR [10.244.3.0/24] 
	I0719 04:37:37.997670       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0719 04:37:37.997690       1 main.go:303] handling current node
	
	
	==> kube-apiserver [31e39f44635dba72799e46f73a50c13c6ba21bf7a3b7dc6391fc917ef06f12f3] <==
	I0719 04:35:17.198245       1 options.go:221] external host was not specified, using 192.168.39.246
	I0719 04:35:17.201652       1 server.go:148] Version: v1.30.3
	I0719 04:35:17.201698       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 04:35:17.871615       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0719 04:35:17.871779       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0719 04:35:17.877034       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0719 04:35:17.877103       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0719 04:35:17.877280       1 instance.go:299] Using reconciler: lease
	W0719 04:35:37.861886       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0719 04:35:37.862153       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0719 04:35:37.877886       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	W0719 04:35:37.879147       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	
	
	==> kube-apiserver [e01d6998bdb35fcf68bf94a93f0f52290926f382244cf5c91f43ccb8653b233c] <==
	I0719 04:35:59.424158       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0719 04:35:59.424218       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0719 04:35:59.489014       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0719 04:35:59.489468       1 shared_informer.go:320] Caches are synced for configmaps
	I0719 04:35:59.489841       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0719 04:35:59.490215       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0719 04:35:59.490279       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0719 04:35:59.490305       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0719 04:35:59.495696       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0719 04:35:59.500112       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.190]
	I0719 04:35:59.525181       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0719 04:35:59.525241       1 aggregator.go:165] initial CRD sync complete...
	I0719 04:35:59.525269       1 autoregister_controller.go:141] Starting autoregister controller
	I0719 04:35:59.525275       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0719 04:35:59.525280       1 cache.go:39] Caches are synced for autoregister controller
	I0719 04:35:59.540221       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0719 04:35:59.549992       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0719 04:35:59.550062       1 policy_source.go:224] refreshing policies
	I0719 04:35:59.587686       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0719 04:35:59.601286       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 04:35:59.613239       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0719 04:35:59.629179       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0719 04:36:00.395459       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0719 04:36:00.786621       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.190 192.168.39.246]
	W0719 04:36:10.762057       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.246]
	
	
	==> kube-controller-manager [2acd3b5d4b137fa864e9c0ca3e381b555ea7b28350ff23870b7291cd3f9ac68d] <==
	I0719 04:35:18.313763       1 serving.go:380] Generated self-signed cert in-memory
	I0719 04:35:18.883345       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0719 04:35:18.883378       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 04:35:18.885035       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0719 04:35:18.885171       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0719 04:35:18.885340       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0719 04:35:18.885613       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0719 04:35:38.887384       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.246:8443/healthz\": dial tcp 192.168.39.246:8443: connect: connection refused"
	
	
	==> kube-controller-manager [f26b178fe8fc44f1c17d3c9396d1db5bf694da9604aee967ff718d1294de0e4d] <==
	I0719 04:36:11.802980       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0719 04:36:11.819428       1 shared_informer.go:320] Caches are synced for disruption
	I0719 04:36:11.892930       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0719 04:36:11.898883       1 shared_informer.go:320] Caches are synced for crt configmap
	I0719 04:36:11.902399       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0719 04:36:11.924971       1 shared_informer.go:320] Caches are synced for endpoint
	I0719 04:36:11.947891       1 shared_informer.go:320] Caches are synced for resource quota
	I0719 04:36:11.954005       1 shared_informer.go:320] Caches are synced for resource quota
	I0719 04:36:12.352749       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 04:36:12.352786       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0719 04:36:12.416361       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 04:36:18.885324       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="120.387µs"
	I0719 04:36:24.391315       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.577343ms"
	I0719 04:36:24.391443       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.297µs"
	I0719 04:36:24.952737       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="16.346415ms"
	I0719 04:36:24.952971       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="128.476µs"
	I0719 04:36:25.004080       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.433439ms"
	I0719 04:36:25.009772       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="117.95µs"
	I0719 04:36:25.004805       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"bf80abb4-ed25-4705-9ba0-a41070aade7e", APIVersion:"v1", ResourceVersion:"260", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-wr6n8 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-wr6n8": the object has been modified; please apply your changes to the latest version and try again
	I0719 04:36:25.004875       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-wr6n8 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-wr6n8\": the object has been modified; please apply your changes to the latest version and try again"
	I0719 04:36:42.129267       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.32426ms"
	I0719 04:36:42.131178       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.061µs"
	I0719 04:37:02.407559       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.812043ms"
	I0719 04:37:02.407689       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.545µs"
	I0719 04:37:29.762774       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-925161-m04"
	
	
	==> kube-proxy [66526fd5cc961dcad93f9334ace4639ff28d46e16c57e6a7665a73c0106842bc] <==
	I0719 04:35:18.007601       1 server_linux.go:69] "Using iptables proxy"
	E0719 04:35:20.767528       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-925161\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0719 04:35:23.840052       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-925161\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0719 04:35:26.912606       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-925161\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0719 04:35:33.056709       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-925161\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0719 04:35:42.272344       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-925161\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0719 04:36:00.704659       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-925161\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0719 04:36:00.704844       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0719 04:36:00.764479       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 04:36:00.764687       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 04:36:00.764714       1 server_linux.go:165] "Using iptables Proxier"
	I0719 04:36:00.783533       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 04:36:00.783999       1 server.go:872] "Version info" version="v1.30.3"
	I0719 04:36:00.784248       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 04:36:00.789648       1 config.go:192] "Starting service config controller"
	I0719 04:36:00.789748       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 04:36:00.789842       1 config.go:101] "Starting endpoint slice config controller"
	I0719 04:36:00.789858       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 04:36:00.791772       1 config.go:319] "Starting node config controller"
	I0719 04:36:00.791795       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 04:36:00.890546       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 04:36:00.890628       1 shared_informer.go:320] Caches are synced for service config
	I0719 04:36:00.892091       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [6c9e12889a1662da7b7e18d67865a94825c080659d17bd601943c475e944e3b6] <==
	E0719 04:32:25.663836       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1993": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 04:32:28.735343       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-925161&resourceVersion=2038": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 04:32:28.735402       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-925161&resourceVersion=2038": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 04:32:28.735478       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2009": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 04:32:28.735509       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2009": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 04:32:28.735486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1993": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 04:32:28.735581       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1993": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 04:32:34.880493       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-925161&resourceVersion=2038": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 04:32:34.880652       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-925161&resourceVersion=2038": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 04:32:34.880879       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1993": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 04:32:34.881060       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1993": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 04:32:34.881270       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2009": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 04:32:34.881358       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2009": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 04:32:44.095578       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1993": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 04:32:44.096848       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1993": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 04:32:47.167690       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-925161&resourceVersion=2038": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 04:32:47.167793       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-925161&resourceVersion=2038": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 04:32:47.167983       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2009": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 04:32:47.168024       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2009": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 04:32:59.455569       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1993": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 04:32:59.455692       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1993": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 04:33:05.600409       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-925161&resourceVersion=2038": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 04:33:05.600495       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-925161&resourceVersion=2038": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 04:33:05.600754       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2009": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 04:33:05.600819       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2009": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [3fd2dbf80d04b99672d475d13595fc0af6b058ef669561db78a22ceb235839f8] <==
	W0719 04:35:54.046465       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.246:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	E0719 04:35:54.046527       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.246:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	W0719 04:35:55.156494       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.246:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	E0719 04:35:55.156562       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.246:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	W0719 04:35:56.409684       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.246:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	E0719 04:35:56.409742       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.246:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	W0719 04:35:56.667802       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.246:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	E0719 04:35:56.667850       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.246:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	W0719 04:35:56.852873       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.246:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	E0719 04:35:56.852922       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.246:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	W0719 04:35:59.431782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0719 04:35:59.431843       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0719 04:35:59.431908       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 04:35:59.431968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0719 04:35:59.432032       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 04:35:59.432058       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 04:35:59.432094       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 04:35:59.432117       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0719 04:35:59.432162       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 04:35:59.432185       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0719 04:35:59.432262       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 04:35:59.432288       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 04:35:59.432337       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 04:35:59.432371       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0719 04:36:10.304245       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [eeef22350ca0f1246fb4f32ba2c27abda6e12fc362e76159e9806d53d4296a23] <==
	W0719 04:33:30.107914       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 04:33:30.107976       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 04:33:30.241858       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 04:33:30.242045       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 04:33:30.322760       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 04:33:30.322793       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0719 04:33:30.538544       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0719 04:33:30.538680       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0719 04:33:30.567093       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 04:33:30.567128       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0719 04:33:30.608485       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0719 04:33:30.608580       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0719 04:33:30.799999       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 04:33:30.800041       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0719 04:33:31.140189       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 04:33:31.140220       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0719 04:33:31.216854       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 04:33:31.216925       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0719 04:33:31.302828       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0719 04:33:31.303003       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0719 04:33:31.332417       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 04:33:31.332660       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0719 04:33:36.469087       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 04:33:36.469124       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 04:33:36.500702       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 19 04:35:51 ha-925161 kubelet[1377]: I0719 04:35:51.488004    1377 status_manager.go:853] "Failed to get status for pod" podUID="36cca920f3f48d0fa2da37f2a22f12ba" pod="kube-system/etcd-ha-925161" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-ha-925161\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 19 04:35:54 ha-925161 kubelet[1377]: I0719 04:35:54.559292    1377 status_manager.go:853] "Failed to get status for pod" podUID="7c423aaede6d00f00e13551d35c79c4b" pod="kube-system/kube-apiserver-ha-925161" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-925161\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 19 04:35:54 ha-925161 kubelet[1377]: E0719 04:35:54.559284    1377 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-925161.17e382fbadbc51e0  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-925161,UID:7c423aaede6d00f00e13551d35c79c4b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-925161,},FirstTimestamp:2024-07-19 04:31:40.048863712 +0000 UTC m=+511.080987736,LastTimestamp:2024-07-19 04:31:40.048863712 +0000 UTC m=+511.080987736,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-925161,}"
	Jul 19 04:35:57 ha-925161 kubelet[1377]: I0719 04:35:57.077868    1377 scope.go:117] "RemoveContainer" containerID="31e39f44635dba72799e46f73a50c13c6ba21bf7a3b7dc6391fc917ef06f12f3"
	Jul 19 04:35:57 ha-925161 kubelet[1377]: I0719 04:35:57.631375    1377 status_manager.go:853] "Failed to get status for pod" podUID="349099d3ab7836a83b145a30eb9936d6" pod="kube-system/kube-controller-manager-ha-925161" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-925161\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 19 04:35:58 ha-925161 kubelet[1377]: I0719 04:35:58.078376    1377 scope.go:117] "RemoveContainer" containerID="2acd3b5d4b137fa864e9c0ca3e381b555ea7b28350ff23870b7291cd3f9ac68d"
	Jul 19 04:36:00 ha-925161 kubelet[1377]: I0719 04:36:00.077589    1377 scope.go:117] "RemoveContainer" containerID="458930eb4d22263ff4b3c2565edc5f57985aadb6c9bccfa7be738ef94f1f5a3d"
	Jul 19 04:36:00 ha-925161 kubelet[1377]: E0719 04:36:00.077786    1377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bf27da3d-f736-4742-9af5-2c0a024075ec)\"" pod="kube-system/storage-provisioner" podUID="bf27da3d-f736-4742-9af5-2c0a024075ec"
	Jul 19 04:36:00 ha-925161 kubelet[1377]: E0719 04:36:00.703377    1377 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-925161\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-925161?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 19 04:36:00 ha-925161 kubelet[1377]: E0719 04:36:00.703586    1377 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-925161?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Jul 19 04:36:00 ha-925161 kubelet[1377]: I0719 04:36:00.703843    1377 status_manager.go:853] "Failed to get status for pod" podUID="cd11aac3-62df-4603-8102-3384bcc100f1" pod="kube-system/kube-proxy-8dbqt" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8dbqt\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 19 04:36:09 ha-925161 kubelet[1377]: E0719 04:36:09.125309    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 04:36:09 ha-925161 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 04:36:09 ha-925161 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 04:36:09 ha-925161 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 04:36:09 ha-925161 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 04:36:09 ha-925161 kubelet[1377]: I0719 04:36:09.155888    1377 scope.go:117] "RemoveContainer" containerID="045e2b3cfc66b6262fa44a5bd06e4d8e1f9812326318a276daa8b6d80eae81cc"
	Jul 19 04:36:15 ha-925161 kubelet[1377]: I0719 04:36:15.078514    1377 scope.go:117] "RemoveContainer" containerID="458930eb4d22263ff4b3c2565edc5f57985aadb6c9bccfa7be738ef94f1f5a3d"
	Jul 19 04:36:37 ha-925161 kubelet[1377]: I0719 04:36:37.078468    1377 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-925161" podUID="8d01a874-336e-476c-b079-852250b3bbcd"
	Jul 19 04:36:37 ha-925161 kubelet[1377]: I0719 04:36:37.097148    1377 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-925161"
	Jul 19 04:37:09 ha-925161 kubelet[1377]: E0719 04:37:09.119326    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 04:37:09 ha-925161 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 04:37:09 ha-925161 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 04:37:09 ha-925161 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 04:37:09 ha-925161 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 04:37:37.377733  153278 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19302-122995/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-925161 -n ha-925161
helpers_test.go:261: (dbg) Run:  kubectl --context ha-925161 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (365.59s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (141.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-925161 stop -v=7 --alsologtostderr: exit status 82 (2m0.463544256s)

                                                
                                                
-- stdout --
	* Stopping node "ha-925161-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 04:37:56.998895  153690 out.go:291] Setting OutFile to fd 1 ...
	I0719 04:37:56.999142  153690 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:37:56.999151  153690 out.go:304] Setting ErrFile to fd 2...
	I0719 04:37:56.999155  153690 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:37:56.999321  153690 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-122995/.minikube/bin
	I0719 04:37:56.999539  153690 out.go:298] Setting JSON to false
	I0719 04:37:56.999615  153690 mustload.go:65] Loading cluster: ha-925161
	I0719 04:37:56.999939  153690 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:37:57.000020  153690 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/config.json ...
	I0719 04:37:57.000196  153690 mustload.go:65] Loading cluster: ha-925161
	I0719 04:37:57.000316  153690 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:37:57.000344  153690 stop.go:39] StopHost: ha-925161-m04
	I0719 04:37:57.000716  153690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:37:57.000758  153690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:37:57.016265  153690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37685
	I0719 04:37:57.016756  153690 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:37:57.017485  153690 main.go:141] libmachine: Using API Version  1
	I0719 04:37:57.017517  153690 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:37:57.017957  153690 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:37:57.020231  153690 out.go:177] * Stopping node "ha-925161-m04"  ...
	I0719 04:37:57.021488  153690 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0719 04:37:57.021517  153690 main.go:141] libmachine: (ha-925161-m04) Calling .DriverName
	I0719 04:37:57.021761  153690 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0719 04:37:57.021809  153690 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHHostname
	I0719 04:37:57.024795  153690 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:37:57.025242  153690 main.go:141] libmachine: (ha-925161-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:a2:b6", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:37:23 +0000 UTC Type:0 Mac:52:54:00:cb:a2:b6 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:ha-925161-m04 Clientid:01:52:54:00:cb:a2:b6}
	I0719 04:37:57.025278  153690 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined IP address 192.168.39.75 and MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:37:57.025425  153690 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHPort
	I0719 04:37:57.025624  153690 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHKeyPath
	I0719 04:37:57.025770  153690 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHUsername
	I0719 04:37:57.025879  153690 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m04/id_rsa Username:docker}
	I0719 04:37:57.106817  153690 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0719 04:37:57.160571  153690 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0719 04:37:57.212253  153690 main.go:141] libmachine: Stopping "ha-925161-m04"...
	I0719 04:37:57.212297  153690 main.go:141] libmachine: (ha-925161-m04) Calling .GetState
	I0719 04:37:57.213718  153690 main.go:141] libmachine: (ha-925161-m04) Calling .Stop
	I0719 04:37:57.217518  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 0/120
	I0719 04:37:58.218938  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 1/120
	I0719 04:37:59.220433  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 2/120
	I0719 04:38:00.221733  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 3/120
	I0719 04:38:01.223720  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 4/120
	I0719 04:38:02.225678  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 5/120
	I0719 04:38:03.227742  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 6/120
	I0719 04:38:04.229337  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 7/120
	I0719 04:38:05.230662  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 8/120
	I0719 04:38:06.232039  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 9/120
	I0719 04:38:07.234186  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 10/120
	I0719 04:38:08.235389  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 11/120
	I0719 04:38:09.236696  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 12/120
	I0719 04:38:10.238107  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 13/120
	I0719 04:38:11.240320  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 14/120
	I0719 04:38:12.242329  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 15/120
	I0719 04:38:13.243692  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 16/120
	I0719 04:38:14.245300  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 17/120
	I0719 04:38:15.246673  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 18/120
	I0719 04:38:16.248408  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 19/120
	I0719 04:38:17.250487  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 20/120
	I0719 04:38:18.251874  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 21/120
	I0719 04:38:19.253907  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 22/120
	I0719 04:38:20.255519  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 23/120
	I0719 04:38:21.257196  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 24/120
	I0719 04:38:22.259029  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 25/120
	I0719 04:38:23.260444  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 26/120
	I0719 04:38:24.261858  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 27/120
	I0719 04:38:25.263297  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 28/120
	I0719 04:38:26.264836  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 29/120
	I0719 04:38:27.266969  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 30/120
	I0719 04:38:28.268984  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 31/120
	I0719 04:38:29.270450  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 32/120
	I0719 04:38:30.271852  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 33/120
	I0719 04:38:31.273322  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 34/120
	I0719 04:38:32.275301  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 35/120
	I0719 04:38:33.276622  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 36/120
	I0719 04:38:34.277971  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 37/120
	I0719 04:38:35.279458  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 38/120
	I0719 04:38:36.280691  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 39/120
	I0719 04:38:37.282817  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 40/120
	I0719 04:38:38.284278  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 41/120
	I0719 04:38:39.285483  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 42/120
	I0719 04:38:40.287674  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 43/120
	I0719 04:38:41.289406  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 44/120
	I0719 04:38:42.291384  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 45/120
	I0719 04:38:43.292841  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 46/120
	I0719 04:38:44.294369  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 47/120
	I0719 04:38:45.295762  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 48/120
	I0719 04:38:46.297418  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 49/120
	I0719 04:38:47.299678  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 50/120
	I0719 04:38:48.301244  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 51/120
	I0719 04:38:49.303445  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 52/120
	I0719 04:38:50.304650  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 53/120
	I0719 04:38:51.306153  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 54/120
	I0719 04:38:52.308194  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 55/120
	I0719 04:38:53.309455  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 56/120
	I0719 04:38:54.310668  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 57/120
	I0719 04:38:55.312056  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 58/120
	I0719 04:38:56.313639  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 59/120
	I0719 04:38:57.315358  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 60/120
	I0719 04:38:58.317422  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 61/120
	I0719 04:38:59.318796  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 62/120
	I0719 04:39:00.320469  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 63/120
	I0719 04:39:01.322388  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 64/120
	I0719 04:39:02.324217  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 65/120
	I0719 04:39:03.326378  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 66/120
	I0719 04:39:04.327646  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 67/120
	I0719 04:39:05.328997  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 68/120
	I0719 04:39:06.330269  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 69/120
	I0719 04:39:07.332010  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 70/120
	I0719 04:39:08.333608  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 71/120
	I0719 04:39:09.335394  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 72/120
	I0719 04:39:10.336914  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 73/120
	I0719 04:39:11.338323  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 74/120
	I0719 04:39:12.340273  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 75/120
	I0719 04:39:13.341626  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 76/120
	I0719 04:39:14.343386  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 77/120
	I0719 04:39:15.344753  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 78/120
	I0719 04:39:16.346121  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 79/120
	I0719 04:39:17.347556  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 80/120
	I0719 04:39:18.349096  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 81/120
	I0719 04:39:19.350420  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 82/120
	I0719 04:39:20.352177  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 83/120
	I0719 04:39:21.354020  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 84/120
	I0719 04:39:22.355431  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 85/120
	I0719 04:39:23.356818  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 86/120
	I0719 04:39:24.358212  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 87/120
	I0719 04:39:25.360408  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 88/120
	I0719 04:39:26.361829  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 89/120
	I0719 04:39:27.364272  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 90/120
	I0719 04:39:28.365723  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 91/120
	I0719 04:39:29.367004  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 92/120
	I0719 04:39:30.369027  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 93/120
	I0719 04:39:31.370613  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 94/120
	I0719 04:39:32.372124  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 95/120
	I0719 04:39:33.373616  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 96/120
	I0719 04:39:34.374892  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 97/120
	I0719 04:39:35.376264  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 98/120
	I0719 04:39:36.377927  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 99/120
	I0719 04:39:37.380088  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 100/120
	I0719 04:39:38.381836  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 101/120
	I0719 04:39:39.383293  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 102/120
	I0719 04:39:40.384782  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 103/120
	I0719 04:39:41.385975  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 104/120
	I0719 04:39:42.387842  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 105/120
	I0719 04:39:43.389085  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 106/120
	I0719 04:39:44.390482  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 107/120
	I0719 04:39:45.391891  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 108/120
	I0719 04:39:46.393349  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 109/120
	I0719 04:39:47.395549  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 110/120
	I0719 04:39:48.397243  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 111/120
	I0719 04:39:49.399294  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 112/120
	I0719 04:39:50.400667  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 113/120
	I0719 04:39:51.402131  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 114/120
	I0719 04:39:52.403824  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 115/120
	I0719 04:39:53.405256  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 116/120
	I0719 04:39:54.406730  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 117/120
	I0719 04:39:55.408577  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 118/120
	I0719 04:39:56.410658  153690 main.go:141] libmachine: (ha-925161-m04) Waiting for machine to stop 119/120
	I0719 04:39:57.411242  153690 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0719 04:39:57.411305  153690 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0719 04:39:57.413434  153690 out.go:177] 
	W0719 04:39:57.414751  153690 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0719 04:39:57.414767  153690 out.go:239] * 
	* 
	W0719 04:39:57.417296  153690 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 04:39:57.418516  153690 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-925161 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-925161 status -v=7 --alsologtostderr: exit status 3 (18.934069944s)

                                                
                                                
-- stdout --
	ha-925161
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-925161-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-925161-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 04:39:57.465508  154096 out.go:291] Setting OutFile to fd 1 ...
	I0719 04:39:57.465758  154096 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:39:57.465767  154096 out.go:304] Setting ErrFile to fd 2...
	I0719 04:39:57.465771  154096 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:39:57.465958  154096 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-122995/.minikube/bin
	I0719 04:39:57.466157  154096 out.go:298] Setting JSON to false
	I0719 04:39:57.466199  154096 mustload.go:65] Loading cluster: ha-925161
	I0719 04:39:57.466258  154096 notify.go:220] Checking for updates...
	I0719 04:39:57.466604  154096 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:39:57.466622  154096 status.go:255] checking status of ha-925161 ...
	I0719 04:39:57.466994  154096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:39:57.467054  154096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:39:57.482681  154096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41981
	I0719 04:39:57.483195  154096 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:39:57.483893  154096 main.go:141] libmachine: Using API Version  1
	I0719 04:39:57.483922  154096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:39:57.484290  154096 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:39:57.484515  154096 main.go:141] libmachine: (ha-925161) Calling .GetState
	I0719 04:39:57.486093  154096 status.go:330] ha-925161 host status = "Running" (err=<nil>)
	I0719 04:39:57.486111  154096 host.go:66] Checking if "ha-925161" exists ...
	I0719 04:39:57.486425  154096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:39:57.486475  154096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:39:57.502467  154096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33675
	I0719 04:39:57.502929  154096 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:39:57.503434  154096 main.go:141] libmachine: Using API Version  1
	I0719 04:39:57.503468  154096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:39:57.503828  154096 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:39:57.504030  154096 main.go:141] libmachine: (ha-925161) Calling .GetIP
	I0719 04:39:57.507044  154096 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:39:57.507569  154096 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:39:57.507612  154096 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:39:57.507908  154096 host.go:66] Checking if "ha-925161" exists ...
	I0719 04:39:57.508364  154096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:39:57.508422  154096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:39:57.524418  154096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45489
	I0719 04:39:57.524830  154096 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:39:57.525417  154096 main.go:141] libmachine: Using API Version  1
	I0719 04:39:57.525445  154096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:39:57.525828  154096 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:39:57.526047  154096 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:39:57.526287  154096 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:39:57.526331  154096 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:39:57.529387  154096 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:39:57.529817  154096 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:39:57.529854  154096 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:39:57.530014  154096 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:39:57.530197  154096 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:39:57.530347  154096 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:39:57.530708  154096 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:39:57.619341  154096 ssh_runner.go:195] Run: systemctl --version
	I0719 04:39:57.626399  154096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:39:57.643034  154096 kubeconfig.go:125] found "ha-925161" server: "https://192.168.39.254:8443"
	I0719 04:39:57.643065  154096 api_server.go:166] Checking apiserver status ...
	I0719 04:39:57.643108  154096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 04:39:57.659578  154096 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5033/cgroup
	W0719 04:39:57.670432  154096 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5033/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 04:39:57.670485  154096 ssh_runner.go:195] Run: ls
	I0719 04:39:57.674913  154096 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 04:39:57.679911  154096 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 04:39:57.679938  154096 status.go:422] ha-925161 apiserver status = Running (err=<nil>)
	I0719 04:39:57.679950  154096 status.go:257] ha-925161 status: &{Name:ha-925161 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:39:57.679969  154096 status.go:255] checking status of ha-925161-m02 ...
	I0719 04:39:57.680284  154096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:39:57.680319  154096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:39:57.695783  154096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45395
	I0719 04:39:57.696212  154096 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:39:57.696786  154096 main.go:141] libmachine: Using API Version  1
	I0719 04:39:57.696815  154096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:39:57.697229  154096 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:39:57.697431  154096 main.go:141] libmachine: (ha-925161-m02) Calling .GetState
	I0719 04:39:57.699964  154096 status.go:330] ha-925161-m02 host status = "Running" (err=<nil>)
	I0719 04:39:57.699987  154096 host.go:66] Checking if "ha-925161-m02" exists ...
	I0719 04:39:57.700391  154096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:39:57.700434  154096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:39:57.715702  154096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36989
	I0719 04:39:57.716229  154096 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:39:57.716691  154096 main.go:141] libmachine: Using API Version  1
	I0719 04:39:57.716713  154096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:39:57.717040  154096 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:39:57.717224  154096 main.go:141] libmachine: (ha-925161-m02) Calling .GetIP
	I0719 04:39:57.720217  154096 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:39:57.720803  154096 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:35:20 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:39:57.720825  154096 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:39:57.721041  154096 host.go:66] Checking if "ha-925161-m02" exists ...
	I0719 04:39:57.721344  154096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:39:57.721387  154096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:39:57.736376  154096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36675
	I0719 04:39:57.736742  154096 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:39:57.737216  154096 main.go:141] libmachine: Using API Version  1
	I0719 04:39:57.737239  154096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:39:57.737641  154096 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:39:57.737833  154096 main.go:141] libmachine: (ha-925161-m02) Calling .DriverName
	I0719 04:39:57.738035  154096 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:39:57.738052  154096 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHHostname
	I0719 04:39:57.740881  154096 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:39:57.741506  154096 main.go:141] libmachine: (ha-925161-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:17:48:0b", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:35:20 +0000 UTC Type:0 Mac:52:54:00:17:48:0b Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-925161-m02 Clientid:01:52:54:00:17:48:0b}
	I0719 04:39:57.741533  154096 main.go:141] libmachine: (ha-925161-m02) DBG | domain ha-925161-m02 has defined IP address 192.168.39.102 and MAC address 52:54:00:17:48:0b in network mk-ha-925161
	I0719 04:39:57.741711  154096 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHPort
	I0719 04:39:57.741895  154096 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHKeyPath
	I0719 04:39:57.742083  154096 main.go:141] libmachine: (ha-925161-m02) Calling .GetSSHUsername
	I0719 04:39:57.742234  154096 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m02/id_rsa Username:docker}
	I0719 04:39:57.835697  154096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:39:57.852409  154096 kubeconfig.go:125] found "ha-925161" server: "https://192.168.39.254:8443"
	I0719 04:39:57.852447  154096 api_server.go:166] Checking apiserver status ...
	I0719 04:39:57.852489  154096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 04:39:57.867432  154096 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1366/cgroup
	W0719 04:39:57.877130  154096 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1366/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 04:39:57.877181  154096 ssh_runner.go:195] Run: ls
	I0719 04:39:57.881013  154096 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0719 04:39:57.885743  154096 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0719 04:39:57.885763  154096 status.go:422] ha-925161-m02 apiserver status = Running (err=<nil>)
	I0719 04:39:57.885772  154096 status.go:257] ha-925161-m02 status: &{Name:ha-925161-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:39:57.885787  154096 status.go:255] checking status of ha-925161-m04 ...
	I0719 04:39:57.886118  154096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:39:57.886153  154096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:39:57.901009  154096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44679
	I0719 04:39:57.901475  154096 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:39:57.901932  154096 main.go:141] libmachine: Using API Version  1
	I0719 04:39:57.901953  154096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:39:57.902339  154096 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:39:57.902555  154096 main.go:141] libmachine: (ha-925161-m04) Calling .GetState
	I0719 04:39:57.904125  154096 status.go:330] ha-925161-m04 host status = "Running" (err=<nil>)
	I0719 04:39:57.904142  154096 host.go:66] Checking if "ha-925161-m04" exists ...
	I0719 04:39:57.904416  154096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:39:57.904457  154096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:39:57.919380  154096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43403
	I0719 04:39:57.919775  154096 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:39:57.920299  154096 main.go:141] libmachine: Using API Version  1
	I0719 04:39:57.920332  154096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:39:57.920705  154096 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:39:57.920965  154096 main.go:141] libmachine: (ha-925161-m04) Calling .GetIP
	I0719 04:39:57.923624  154096 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:39:57.924088  154096 main.go:141] libmachine: (ha-925161-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:a2:b6", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:37:23 +0000 UTC Type:0 Mac:52:54:00:cb:a2:b6 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:ha-925161-m04 Clientid:01:52:54:00:cb:a2:b6}
	I0719 04:39:57.924117  154096 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined IP address 192.168.39.75 and MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:39:57.924271  154096 host.go:66] Checking if "ha-925161-m04" exists ...
	I0719 04:39:57.924559  154096 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:39:57.924605  154096 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:39:57.939367  154096 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33903
	I0719 04:39:57.939828  154096 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:39:57.940570  154096 main.go:141] libmachine: Using API Version  1
	I0719 04:39:57.940596  154096 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:39:57.940913  154096 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:39:57.941181  154096 main.go:141] libmachine: (ha-925161-m04) Calling .DriverName
	I0719 04:39:57.941429  154096 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:39:57.941455  154096 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHHostname
	I0719 04:39:57.944005  154096 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:39:57.944416  154096 main.go:141] libmachine: (ha-925161-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cb:a2:b6", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:37:23 +0000 UTC Type:0 Mac:52:54:00:cb:a2:b6 Iaid: IPaddr:192.168.39.75 Prefix:24 Hostname:ha-925161-m04 Clientid:01:52:54:00:cb:a2:b6}
	I0719 04:39:57.944444  154096 main.go:141] libmachine: (ha-925161-m04) DBG | domain ha-925161-m04 has defined IP address 192.168.39.75 and MAC address 52:54:00:cb:a2:b6 in network mk-ha-925161
	I0719 04:39:57.944558  154096 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHPort
	I0719 04:39:57.944722  154096 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHKeyPath
	I0719 04:39:57.944900  154096 main.go:141] libmachine: (ha-925161-m04) Calling .GetSSHUsername
	I0719 04:39:57.945043  154096 sshutil.go:53] new ssh client: &{IP:192.168.39.75 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161-m04/id_rsa Username:docker}
	W0719 04:40:16.353261  154096 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.75:22: connect: no route to host
	W0719 04:40:16.353375  154096 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.75:22: connect: no route to host
	E0719 04:40:16.353399  154096 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.75:22: connect: no route to host
	I0719 04:40:16.353410  154096 status.go:257] ha-925161-m04 status: &{Name:ha-925161-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0719 04:40:16.353435  154096 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.75:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-925161 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-925161 -n ha-925161
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-925161 logs -n 25: (1.638740197s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-925161 ssh -n ha-925161-m02 sudo cat                                          | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | /home/docker/cp-test_ha-925161-m03_ha-925161-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-925161 cp ha-925161-m03:/home/docker/cp-test.txt                              | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m04:/home/docker/cp-test_ha-925161-m03_ha-925161-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n                                                                 | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n ha-925161-m04 sudo cat                                          | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | /home/docker/cp-test_ha-925161-m03_ha-925161-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-925161 cp testdata/cp-test.txt                                                | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n                                                                 | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-925161 cp ha-925161-m04:/home/docker/cp-test.txt                              | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3159028946/001/cp-test_ha-925161-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n                                                                 | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-925161 cp ha-925161-m04:/home/docker/cp-test.txt                              | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161:/home/docker/cp-test_ha-925161-m04_ha-925161.txt                       |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n                                                                 | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n ha-925161 sudo cat                                              | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | /home/docker/cp-test_ha-925161-m04_ha-925161.txt                                 |           |         |         |                     |                     |
	| cp      | ha-925161 cp ha-925161-m04:/home/docker/cp-test.txt                              | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m02:/home/docker/cp-test_ha-925161-m04_ha-925161-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n                                                                 | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n ha-925161-m02 sudo cat                                          | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | /home/docker/cp-test_ha-925161-m04_ha-925161-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-925161 cp ha-925161-m04:/home/docker/cp-test.txt                              | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m03:/home/docker/cp-test_ha-925161-m04_ha-925161-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n                                                                 | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | ha-925161-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-925161 ssh -n ha-925161-m03 sudo cat                                          | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:28 UTC |
	|         | /home/docker/cp-test_ha-925161-m04_ha-925161-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-925161 node stop m02 -v=7                                                     | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-925161 node start m02 -v=7                                                    | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:30 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-925161 -v=7                                                           | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:31 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-925161 -v=7                                                                | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:31 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-925161 --wait=true -v=7                                                    | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:33 UTC | 19 Jul 24 04:37 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-925161                                                                | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:37 UTC |                     |
	| node    | ha-925161 node delete m03 -v=7                                                   | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:37 UTC | 19 Jul 24 04:37 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-925161 stop -v=7                                                              | ha-925161 | jenkins | v1.33.1 | 19 Jul 24 04:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 04:33:35
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 04:33:35.692545  151865 out.go:291] Setting OutFile to fd 1 ...
	I0719 04:33:35.692686  151865 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:33:35.692697  151865 out.go:304] Setting ErrFile to fd 2...
	I0719 04:33:35.692703  151865 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:33:35.693141  151865 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-122995/.minikube/bin
	I0719 04:33:35.693745  151865 out.go:298] Setting JSON to false
	I0719 04:33:35.694657  151865 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":8159,"bootTime":1721355457,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 04:33:35.694716  151865 start.go:139] virtualization: kvm guest
	I0719 04:33:35.697285  151865 out.go:177] * [ha-925161] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 04:33:35.699142  151865 notify.go:220] Checking for updates...
	I0719 04:33:35.699161  151865 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 04:33:35.700706  151865 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 04:33:35.702204  151865 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-122995/kubeconfig
	I0719 04:33:35.703629  151865 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-122995/.minikube
	I0719 04:33:35.704810  151865 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 04:33:35.705933  151865 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 04:33:35.707522  151865 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:33:35.707656  151865 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 04:33:35.708094  151865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:33:35.708142  151865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:33:35.723581  151865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39875
	I0719 04:33:35.724070  151865 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:33:35.724608  151865 main.go:141] libmachine: Using API Version  1
	I0719 04:33:35.724629  151865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:33:35.725037  151865 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:33:35.725283  151865 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:33:35.760630  151865 out.go:177] * Using the kvm2 driver based on existing profile
	I0719 04:33:35.761871  151865 start.go:297] selected driver: kvm2
	I0719 04:33:35.761891  151865 start.go:901] validating driver "kvm2" against &{Name:ha-925161 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-925161 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.75 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:33:35.762052  151865 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 04:33:35.762386  151865 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 04:33:35.762462  151865 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-122995/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 04:33:35.777973  151865 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 04:33:35.778592  151865 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 04:33:35.778632  151865 cni.go:84] Creating CNI manager for ""
	I0719 04:33:35.778637  151865 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0719 04:33:35.778686  151865 start.go:340] cluster config:
	{Name:ha-925161 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-925161 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.75 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:33:35.778787  151865 iso.go:125] acquiring lock: {Name:mk610026cb7ac7ecfa6440021a031d3b49160f81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 04:33:35.780926  151865 out.go:177] * Starting "ha-925161" primary control-plane node in "ha-925161" cluster
	I0719 04:33:35.782331  151865 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 04:33:35.782369  151865 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 04:33:35.782377  151865 cache.go:56] Caching tarball of preloaded images
	I0719 04:33:35.782461  151865 preload.go:172] Found /home/jenkins/minikube-integration/19302-122995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 04:33:35.782473  151865 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 04:33:35.782575  151865 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/config.json ...
	I0719 04:33:35.782767  151865 start.go:360] acquireMachinesLock for ha-925161: {Name:mkfbbe6ca8c44534b944b48224a0199ec825bc72 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 04:33:35.782824  151865 start.go:364] duration metric: took 24.807µs to acquireMachinesLock for "ha-925161"
	I0719 04:33:35.782845  151865 start.go:96] Skipping create...Using existing machine configuration
	I0719 04:33:35.782853  151865 fix.go:54] fixHost starting: 
	I0719 04:33:35.783112  151865 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:33:35.783136  151865 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:33:35.797552  151865 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32929
	I0719 04:33:35.797951  151865 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:33:35.798497  151865 main.go:141] libmachine: Using API Version  1
	I0719 04:33:35.798516  151865 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:33:35.798911  151865 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:33:35.799154  151865 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:33:35.799324  151865 main.go:141] libmachine: (ha-925161) Calling .GetState
	I0719 04:33:35.800875  151865 fix.go:112] recreateIfNeeded on ha-925161: state=Running err=<nil>
	W0719 04:33:35.800900  151865 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 04:33:35.802896  151865 out.go:177] * Updating the running kvm2 "ha-925161" VM ...
	I0719 04:33:35.804152  151865 machine.go:94] provisionDockerMachine start ...
	I0719 04:33:35.804172  151865 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:33:35.804372  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:33:35.807109  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:33:35.807552  151865 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:33:35.807584  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:33:35.807725  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:33:35.807894  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:33:35.808059  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:33:35.808198  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:33:35.808382  151865 main.go:141] libmachine: Using SSH client type: native
	I0719 04:33:35.808567  151865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0719 04:33:35.808578  151865 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 04:33:35.926803  151865 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-925161
	
	I0719 04:33:35.926885  151865 main.go:141] libmachine: (ha-925161) Calling .GetMachineName
	I0719 04:33:35.927210  151865 buildroot.go:166] provisioning hostname "ha-925161"
	I0719 04:33:35.927236  151865 main.go:141] libmachine: (ha-925161) Calling .GetMachineName
	I0719 04:33:35.927449  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:33:35.929933  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:33:35.930288  151865 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:33:35.930319  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:33:35.930518  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:33:35.930691  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:33:35.930821  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:33:35.930971  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:33:35.931113  151865 main.go:141] libmachine: Using SSH client type: native
	I0719 04:33:35.931311  151865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0719 04:33:35.931327  151865 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-925161 && echo "ha-925161" | sudo tee /etc/hostname
	I0719 04:33:36.061811  151865 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-925161
	
	I0719 04:33:36.061854  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:33:36.064593  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:33:36.064981  151865 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:33:36.065011  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:33:36.065247  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:33:36.065452  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:33:36.065608  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:33:36.065721  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:33:36.065851  151865 main.go:141] libmachine: Using SSH client type: native
	I0719 04:33:36.066017  151865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0719 04:33:36.066033  151865 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-925161' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-925161/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-925161' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 04:33:36.177916  151865 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 04:33:36.177955  151865 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-122995/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-122995/.minikube}
	I0719 04:33:36.177995  151865 buildroot.go:174] setting up certificates
	I0719 04:33:36.178010  151865 provision.go:84] configureAuth start
	I0719 04:33:36.178028  151865 main.go:141] libmachine: (ha-925161) Calling .GetMachineName
	I0719 04:33:36.178382  151865 main.go:141] libmachine: (ha-925161) Calling .GetIP
	I0719 04:33:36.180893  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:33:36.181268  151865 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:33:36.181309  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:33:36.181462  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:33:36.183623  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:33:36.184017  151865 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:33:36.184039  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:33:36.184212  151865 provision.go:143] copyHostCerts
	I0719 04:33:36.184256  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem
	I0719 04:33:36.184307  151865 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem, removing ...
	I0719 04:33:36.184319  151865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem
	I0719 04:33:36.184414  151865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem (1082 bytes)
	I0719 04:33:36.184515  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem
	I0719 04:33:36.184541  151865 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem, removing ...
	I0719 04:33:36.184548  151865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem
	I0719 04:33:36.184590  151865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem (1123 bytes)
	I0719 04:33:36.184651  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem
	I0719 04:33:36.184673  151865 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem, removing ...
	I0719 04:33:36.184681  151865 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem
	I0719 04:33:36.184713  151865 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem (1679 bytes)
	I0719 04:33:36.184775  151865 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem org=jenkins.ha-925161 san=[127.0.0.1 192.168.39.246 ha-925161 localhost minikube]
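The provisioner regenerates the machine server certificate with the SAN list shown above before copying it into the guest as /etc/docker/server.pem. To confirm by hand which names ended up in that certificate, an openssl inspection along these lines would work; this is an illustrative sketch run from the Jenkins host, not something the test itself executes:

	# hypothetical manual inspection, not part of the test run
	openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem \
	    | grep -A1 'Subject Alternative Name'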
	I0719 04:33:36.234680  151865 provision.go:177] copyRemoteCerts
	I0719 04:33:36.234742  151865 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 04:33:36.234767  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:33:36.237251  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:33:36.237570  151865 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:33:36.237594  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:33:36.237769  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:33:36.237947  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:33:36.238087  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:33:36.238221  151865 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:33:36.323251  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 04:33:36.323330  151865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0719 04:33:36.346931  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 04:33:36.347034  151865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 04:33:36.370183  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 04:33:36.370266  151865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 04:33:36.392967  151865 provision.go:87] duration metric: took 214.93921ms to configureAuth
	I0719 04:33:36.392993  151865 buildroot.go:189] setting minikube options for container-runtime
	I0719 04:33:36.393264  151865 config.go:182] Loaded profile config "ha-925161": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:33:36.393367  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:33:36.395947  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:33:36.396474  151865 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:33:36.396506  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:33:36.396794  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:33:36.397012  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:33:36.397235  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:33:36.397386  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:33:36.397557  151865 main.go:141] libmachine: Using SSH client type: native
	I0719 04:33:36.397752  151865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0719 04:33:36.397769  151865 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 04:35:07.347124  151865 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 04:35:07.347170  151865 machine.go:97] duration metric: took 1m31.54300109s to provisionDockerMachine
	I0719 04:35:07.347186  151865 start.go:293] postStartSetup for "ha-925161" (driver="kvm2")
	I0719 04:35:07.347212  151865 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 04:35:07.347235  151865 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:35:07.347590  151865 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 04:35:07.347622  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:35:07.350874  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:35:07.351334  151865 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:35:07.351368  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:35:07.351553  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:35:07.351736  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:35:07.351884  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:35:07.352043  151865 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:35:07.436387  151865 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 04:35:07.440446  151865 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 04:35:07.440478  151865 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-122995/.minikube/addons for local assets ...
	I0719 04:35:07.440571  151865 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-122995/.minikube/files for local assets ...
	I0719 04:35:07.440652  151865 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem -> 1301702.pem in /etc/ssl/certs
	I0719 04:35:07.440663  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem -> /etc/ssl/certs/1301702.pem
	I0719 04:35:07.440744  151865 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 04:35:07.449620  151865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem --> /etc/ssl/certs/1301702.pem (1708 bytes)
	I0719 04:35:07.473090  151865 start.go:296] duration metric: took 125.866906ms for postStartSetup
	I0719 04:35:07.473139  151865 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:35:07.473444  151865 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0719 04:35:07.473472  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:35:07.476155  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:35:07.476514  151865 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:35:07.476542  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:35:07.476711  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:35:07.476902  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:35:07.477103  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:35:07.477235  151865 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	W0719 04:35:07.559252  151865 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0719 04:35:07.559280  151865 fix.go:56] duration metric: took 1m31.776428279s for fixHost
	I0719 04:35:07.559303  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:35:07.562017  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:35:07.562292  151865 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:35:07.562320  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:35:07.562479  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:35:07.562733  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:35:07.562909  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:35:07.563105  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:35:07.563276  151865 main.go:141] libmachine: Using SSH client type: native
	I0719 04:35:07.563437  151865 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0719 04:35:07.563447  151865 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 04:35:07.680985  151865 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721363707.638853844
	
	I0719 04:35:07.681009  151865 fix.go:216] guest clock: 1721363707.638853844
	I0719 04:35:07.681016  151865 fix.go:229] Guest: 2024-07-19 04:35:07.638853844 +0000 UTC Remote: 2024-07-19 04:35:07.55928743 +0000 UTC m=+91.903391287 (delta=79.566414ms)
	I0719 04:35:07.681035  151865 fix.go:200] guest clock delta is within tolerance: 79.566414ms
	I0719 04:35:07.681041  151865 start.go:83] releasing machines lock for "ha-925161", held for 1m31.898203709s
	I0719 04:35:07.681058  151865 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:35:07.681408  151865 main.go:141] libmachine: (ha-925161) Calling .GetIP
	I0719 04:35:07.684253  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:35:07.684689  151865 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:35:07.684720  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:35:07.684881  151865 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:35:07.685468  151865 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:35:07.685670  151865 main.go:141] libmachine: (ha-925161) Calling .DriverName
	I0719 04:35:07.685783  151865 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 04:35:07.685832  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:35:07.685901  151865 ssh_runner.go:195] Run: cat /version.json
	I0719 04:35:07.685927  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHHostname
	I0719 04:35:07.688778  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:35:07.688802  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:35:07.689336  151865 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:35:07.689363  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:35:07.689393  151865 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:35:07.689406  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:35:07.689517  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:35:07.689670  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHPort
	I0719 04:35:07.689728  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:35:07.689810  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHKeyPath
	I0719 04:35:07.689931  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:35:07.689974  151865 main.go:141] libmachine: (ha-925161) Calling .GetSSHUsername
	I0719 04:35:07.690049  151865 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:35:07.690163  151865 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/ha-925161/id_rsa Username:docker}
	I0719 04:35:07.845554  151865 ssh_runner.go:195] Run: systemctl --version
	I0719 04:35:07.852398  151865 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 04:35:08.006430  151865 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 04:35:08.012288  151865 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 04:35:08.012379  151865 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 04:35:08.021312  151865 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0719 04:35:08.021333  151865 start.go:495] detecting cgroup driver to use...
	I0719 04:35:08.021391  151865 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 04:35:08.037463  151865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 04:35:08.051513  151865 docker.go:217] disabling cri-docker service (if available) ...
	I0719 04:35:08.051575  151865 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 04:35:08.064817  151865 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 04:35:08.077749  151865 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 04:35:08.223946  151865 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 04:35:08.373086  151865 docker.go:233] disabling docker service ...
	I0719 04:35:08.373179  151865 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 04:35:08.391874  151865 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 04:35:08.404580  151865 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 04:35:08.559818  151865 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 04:35:08.717444  151865 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 04:35:08.732752  151865 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 04:35:08.750258  151865 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 04:35:08.750328  151865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:35:08.760675  151865 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 04:35:08.760751  151865 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:35:08.770640  151865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:35:08.780133  151865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:35:08.789546  151865 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 04:35:08.799278  151865 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:35:08.809283  151865 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:35:08.819312  151865 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:35:08.828716  151865 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 04:35:08.837396  151865 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 04:35:08.846127  151865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:35:08.989262  151865 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 04:35:09.438106  151865 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 04:35:09.438190  151865 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 04:35:09.442887  151865 start.go:563] Will wait 60s for crictl version
	I0719 04:35:09.442934  151865 ssh_runner.go:195] Run: which crictl
	I0719 04:35:09.446388  151865 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 04:35:09.487544  151865 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 04:35:09.487623  151865 ssh_runner.go:195] Run: crio --version
	I0719 04:35:09.514226  151865 ssh_runner.go:195] Run: crio --version
	I0719 04:35:09.542867  151865 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 04:35:09.544162  151865 main.go:141] libmachine: (ha-925161) Calling .GetIP
	I0719 04:35:09.546827  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:35:09.547202  151865 main.go:141] libmachine: (ha-925161) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:c3:8c", ip: ""} in network mk-ha-925161: {Iface:virbr1 ExpiryTime:2024-07-19 05:22:43 +0000 UTC Type:0 Mac:52:54:00:15:c3:8c Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-925161 Clientid:01:52:54:00:15:c3:8c}
	I0719 04:35:09.547233  151865 main.go:141] libmachine: (ha-925161) DBG | domain ha-925161 has defined IP address 192.168.39.246 and MAC address 52:54:00:15:c3:8c in network mk-ha-925161
	I0719 04:35:09.547408  151865 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 04:35:09.551827  151865 kubeadm.go:883] updating cluster {Name:ha-925161 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-925161 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.75 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 04:35:09.551955  151865 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 04:35:09.551998  151865 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 04:35:09.594369  151865 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 04:35:09.594393  151865 crio.go:433] Images already preloaded, skipping extraction
	I0719 04:35:09.594443  151865 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 04:35:09.626693  151865 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 04:35:09.626718  151865 cache_images.go:84] Images are preloaded, skipping loading
	I0719 04:35:09.626729  151865 kubeadm.go:934] updating node { 192.168.39.246 8443 v1.30.3 crio true true} ...
	I0719 04:35:09.626846  151865 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-925161 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-925161 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 04:35:09.626926  151865 ssh_runner.go:195] Run: crio config
	I0719 04:35:09.671551  151865 cni.go:84] Creating CNI manager for ""
	I0719 04:35:09.671576  151865 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0719 04:35:09.671586  151865 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 04:35:09.671608  151865 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.246 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-925161 NodeName:ha-925161 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 04:35:09.671752  151865 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-925161"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.246
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
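The kubeadm configuration above is what later gets copied into the guest as /var/tmp/minikube/kubeadm.yaml.new (see the scp step further down). For anyone reproducing this by hand, a config of this shape can be sanity-checked inside the VM roughly as follows; this is only an illustrative sketch and assumes the kubeadm binary staged under /var/lib/minikube/binaries supports the config validate subcommand (present in v1.30.x):

	# hypothetical manual check, not part of the test run
	sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new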
	
	I0719 04:35:09.671770  151865 kube-vip.go:115] generating kube-vip config ...
	I0719 04:35:09.671812  151865 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0719 04:35:09.682679  151865 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0719 04:35:09.682794  151865 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
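The kube-vip static pod manifest above is the file later written to /etc/kubernetes/manifests/kube-vip.yaml (see the scp step below). A quick manual way to confirm the advertised VIP is actually held by this node would be something like the following, reusing the eth0 interface and 192.168.39.254 address from the manifest; an illustrative sketch only, not part of the test run:

	# hypothetical manual check, not part of the test run
	ip -4 addr show dev eth0 | grep 192.168.39.254    # VIP present once kube-vip wins the lease
	sudo crictl ps --name kube-vip                     # static pod should be running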
	I0719 04:35:09.682858  151865 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 04:35:09.698765  151865 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 04:35:09.698838  151865 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0719 04:35:09.707474  151865 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0719 04:35:09.722824  151865 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 04:35:09.737996  151865 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0719 04:35:09.753411  151865 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0719 04:35:09.769703  151865 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0719 04:35:09.773346  151865 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:35:09.914993  151865 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 04:35:09.929494  151865 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161 for IP: 192.168.39.246
	I0719 04:35:09.929519  151865 certs.go:194] generating shared ca certs ...
	I0719 04:35:09.929541  151865 certs.go:226] acquiring lock for ca certs: {Name:mk4073377b5f511f5cfaf63e5b0f12377e731a57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:35:09.929734  151865 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key
	I0719 04:35:09.929785  151865 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key
	I0719 04:35:09.929796  151865 certs.go:256] generating profile certs ...
	I0719 04:35:09.929907  151865 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/client.key
	I0719 04:35:09.929935  151865 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key.e5d4f658
	I0719 04:35:09.929950  151865 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt.e5d4f658 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.246 192.168.39.102 192.168.39.190 192.168.39.254]
	I0719 04:35:10.047641  151865 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt.e5d4f658 ...
	I0719 04:35:10.047673  151865 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt.e5d4f658: {Name:mk89a72b0e2e9fa9b2ea52621e70171d251b7911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:35:10.047847  151865 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key.e5d4f658 ...
	I0719 04:35:10.047859  151865 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key.e5d4f658: {Name:mkea9a4f1a5669869dceecbc30924745027a923d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:35:10.047930  151865 certs.go:381] copying /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt.e5d4f658 -> /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt
	I0719 04:35:10.048077  151865 certs.go:385] copying /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key.e5d4f658 -> /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key
	I0719 04:35:10.048207  151865 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.key
	I0719 04:35:10.048223  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 04:35:10.048235  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0719 04:35:10.048249  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 04:35:10.048261  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 04:35:10.048275  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 04:35:10.048294  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 04:35:10.048306  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 04:35:10.048323  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 04:35:10.048376  151865 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem (1338 bytes)
	W0719 04:35:10.048405  151865 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170_empty.pem, impossibly tiny 0 bytes
	I0719 04:35:10.048414  151865 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 04:35:10.048438  151865 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem (1082 bytes)
	I0719 04:35:10.048483  151865 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem (1123 bytes)
	I0719 04:35:10.048512  151865 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem (1679 bytes)
	I0719 04:35:10.048551  151865 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem (1708 bytes)
	I0719 04:35:10.048576  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem -> /usr/share/ca-certificates/1301702.pem
	I0719 04:35:10.048589  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:35:10.048601  151865 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem -> /usr/share/ca-certificates/130170.pem
	I0719 04:35:10.049192  151865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 04:35:10.073338  151865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 04:35:10.095128  151865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 04:35:10.116541  151865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 04:35:10.138353  151865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0719 04:35:10.159442  151865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 04:35:10.180524  151865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 04:35:10.201852  151865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/ha-925161/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 04:35:10.223274  151865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem --> /usr/share/ca-certificates/1301702.pem (1708 bytes)
	I0719 04:35:10.245315  151865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 04:35:10.266817  151865 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem --> /usr/share/ca-certificates/130170.pem (1338 bytes)
	I0719 04:35:10.289176  151865 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 04:35:10.305055  151865 ssh_runner.go:195] Run: openssl version
	I0719 04:35:10.310807  151865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1301702.pem && ln -fs /usr/share/ca-certificates/1301702.pem /etc/ssl/certs/1301702.pem"
	I0719 04:35:10.321049  151865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1301702.pem
	I0719 04:35:10.325183  151865 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 04:19 /usr/share/ca-certificates/1301702.pem
	I0719 04:35:10.325228  151865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1301702.pem
	I0719 04:35:10.330487  151865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1301702.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 04:35:10.339428  151865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 04:35:10.349420  151865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:35:10.353391  151865 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:38 /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:35:10.353433  151865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:35:10.358669  151865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 04:35:10.367835  151865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130170.pem && ln -fs /usr/share/ca-certificates/130170.pem /etc/ssl/certs/130170.pem"
	I0719 04:35:10.378177  151865 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130170.pem
	I0719 04:35:10.382221  151865 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 04:19 /usr/share/ca-certificates/130170.pem
	I0719 04:35:10.382261  151865 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130170.pem
	I0719 04:35:10.387375  151865 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/130170.pem /etc/ssl/certs/51391683.0"
	I0719 04:35:10.396014  151865 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 04:35:10.400295  151865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 04:35:10.405601  151865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 04:35:10.410768  151865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 04:35:10.415783  151865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 04:35:10.421003  151865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 04:35:10.426122  151865 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 04:35:10.431278  151865 kubeadm.go:392] StartCluster: {Name:ha-925161 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-925161 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.246 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.75 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:35:10.431443  151865 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 04:35:10.431490  151865 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 04:35:10.469617  151865 cri.go:89] found id: "09474a983e27b34e3673dcf223551088ab64428984deeb2a6ca8b535efe763f7"
	I0719 04:35:10.469641  151865 cri.go:89] found id: "8e2186685ce5385380419621a7d62e66847c580c15f2eb81e3568193d1d88a14"
	I0719 04:35:10.469644  151865 cri.go:89] found id: "9e5dff8dcfc51d728c29b5a44595ae338a5d83270e42b3d1c79d03ce684ae57f"
	I0719 04:35:10.469647  151865 cri.go:89] found id: "045e2b3cfc66b6262fa44a5bd06e4d8e1f9812326318a276daa8b6d80eae81cc"
	I0719 04:35:10.469651  151865 cri.go:89] found id: "f8fbd19dd4d996dc34ade93b87b023c88d559af11bbea8aaf9f9b3a2e6f05672"
	I0719 04:35:10.469655  151865 cri.go:89] found id: "14f21e70e6b65805b44f3ff4e90dd773f402ad0eb25822b81eda2c2816bc7691"
	I0719 04:35:10.469659  151865 cri.go:89] found id: "1109d10f2b3d47b16b6903644a11f08183882b4c704e06aa23088c57b2fa5036"
	I0719 04:35:10.469663  151865 cri.go:89] found id: "6c9e12889a1662da7b7e18d67865a94825c080659d17bd601943c475e944e3b6"
	I0719 04:35:10.469667  151865 cri.go:89] found id: "ae55b7f5bd7bf842ca50cf5c5b471045260fe96b7a4a5ff03cf587c15f692412"
	I0719 04:35:10.469675  151865 cri.go:89] found id: "eeef22350ca0f1246fb4f32ba2c27abda6e12fc362e76159e9806d53d4296a23"
	I0719 04:35:10.469691  151865 cri.go:89] found id: "b041f48cc90cfab5ce219992de0d8ffb58d778e852683bd80044d359a0b7d010"
	I0719 04:35:10.469698  151865 cri.go:89] found id: "6794bae567b7e2c964bdfab18ab28a02cd5bad8823d55bae131a60e8dbefd012"
	I0719 04:35:10.469702  151865 cri.go:89] found id: "882ed073edd75b4a9831d3ded02cad425e74f0eab0bb34819f37757829560513"
	I0719 04:35:10.469707  151865 cri.go:89] found id: ""
	I0719 04:35:10.469758  151865 ssh_runner.go:195] Run: sudo runc list -f json
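	(Editor's note: the openssl runs above gate cluster start on certificate freshness: "-checkend 86400" fails if a certificate expires within the next 24 hours. The Go sketch below performs the equivalent check with crypto/x509; it is an illustration only, and the certificate path is an assumption taken from the log.)

// checkend.go: hedged sketch of what "openssl x509 -noout -in <cert> -checkend 86400"
// verifies in the log above: the certificate must not expire within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Same window as -checkend 86400: fail if NotAfter falls within the next day.
	deadline := time.Now().Add(86400 * time.Second)
	if cert.NotAfter.Before(deadline) {
		fmt.Println("certificate will expire within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid until", cert.NotAfter)
}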
	
	
	==> CRI-O <==
	Jul 19 04:40:17 ha-925161 crio[3828]: time="2024-07-19 04:40:17.011707664Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2ef5c793-c059-4a58-8b81-31829fea4b71 name=/runtime.v1.RuntimeService/Version
	Jul 19 04:40:17 ha-925161 crio[3828]: time="2024-07-19 04:40:17.014105653Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=21597c57-bac7-47f5-9406-d9fdb0955233 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 04:40:17 ha-925161 crio[3828]: time="2024-07-19 04:40:17.015294757Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721364017015265786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144984,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=21597c57-bac7-47f5-9406-d9fdb0955233 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 04:40:17 ha-925161 crio[3828]: time="2024-07-19 04:40:17.015791650Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1d129505-c11b-4c9f-8599-f3671c7260a6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:40:17 ha-925161 crio[3828]: time="2024-07-19 04:40:17.015862769Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1d129505-c11b-4c9f-8599-f3671c7260a6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:40:17 ha-925161 crio[3828]: time="2024-07-19 04:40:17.016340026Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de38d7f8ad913451255e6229dc934869431b48c5f872bcecb0f3e1a403da4cb4,PodSandboxId:487e1cddacb84e748c6dfde9fd66ec573f5c3b4c3bc99fa2e21511e78ce3652b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721363775087647557,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf27da3d-f736-4742-9af5-2c0a024075ec,},Annotations:map[string]string{io.kubernetes.container.hash: f97e67c5,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f26b178fe8fc44f1c17d3c9396d1db5bf694da9604aee967ff718d1294de0e4d,PodSandboxId:63f316747da1dc5053319feffa25889a8f469ebabe3fca1227fa8a4a377b6dd0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721363758094491359,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 349099d3ab7836a83b145a30eb9936d6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e01d6998bdb35fcf68bf94a93f0f52290926f382244cf5c91f43ccb8653b233c,PodSandboxId:1fa6c472995417d2338ecc16a10635df2b9e4c896f1c46924054ff1a34148ab3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721363757089338504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c423aaede6d00f00e13551d35c79c4b,},Annotations:map[string]string{io.kubernetes.container.hash: 9eff95bf,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f26b00712a6d5c4d0880df5fe980d974c4610752b924c5d0dfb834e87567fca9,PodSandboxId:6ca18b08ad5cff45f7e0e989e6f170ffc8941bedaf873f70a71407c84aa34f2a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721363750196671162,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xjdg9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e5d1049-6c89-429b-96a8-cbb8abd2b26f,},Annotations:map[string]string{io.kubernetes.container.hash: 3ac1d6ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fee45f18b02073c7552269415ff4c082be8f7549456304a60fa420eaf656d817,PodSandboxId:1a9981cea564c7986a1621609a2660923a7d1c12bf1212ce32e5c9e49a7b682d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721363733157531296,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0da61aa9c7d9fb5aa54fb9d86519c66d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:458930eb4d22263ff4b3c2565edc5f57985aadb6c9bccfa7be738ef94f1f5a3d,PodSandboxId:487e1cddacb84e748c6dfde9fd66ec573f5c3b4c3bc99fa2e21511e78ce3652b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721363717079792588,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf27da3d-f736-4742-9af5-2c0a024075ec,},Annotations:map[string]string{io.kubernetes.container.hash: f97e67c5,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66526fd5cc961dcad93f9334ace4639ff28d46e16c57e6a7665a73c0106842bc,PodSandboxId:9a7e15608cb13a54b49490ee57950e0bf26fe26abc77f21ac40699335603a3cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721363717094993295,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8dbqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd11aac3-62df-4603-8102-3384bcc100f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3deffe05,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:76385fe3aa9b3b1e67ba577f6669ee7c0a1a6cd4a3652f4043910a7d5e44af35,PodSandboxId:e3f389f30197bdfddffd259c3e20564e84b4c8d360a171e7fc586e409583883a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721363716928353935,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fsr5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 988e1118-927a-4468-ba25-3a78d8d06919,},Annotations:map[string]string{io.kubernetes.container.hash: 7fbb852b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1961d5e5
da1a0a8c475b7c70eaf56087054dbc2a459fc00fd013b69ecb5d5b31,PodSandboxId:b0fca0835d179f0ac31e1ae710482ca32ee75304f4f77e608e8d6c1b15002676,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721363716874890524,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hwdsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894f9528-78da-4cae-9ec6-8e82a3e73264,},Annotations:map[string]string{io.kubernetes.container.hash: 7fba949d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4093158c49f1e636104b6473da67cd759726ffb37667deb0fdb30953bfff3ce0,PodSandboxId:a066744bcf4f49e5350c8b2feb87f41aa9fca5658ad8ba7b17fbff019ff6fe06,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721363716985640108,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wzcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a434f69a-903d-4961-a54c-9a85cbc694b1,},Annotations:map[string]string{io.kubernetes.container.hash: 40e24975,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38d19d3723a70de48633fa0df9c13a3b49ac927058ddbec544d2e1756ed2128b,PodSandboxId:1c0dbac79d5413baceab0f90d5d10b4817530c9b1715b96109ef52acda220867,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721363716813072324,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
36cca920f3f48d0fa2da37f2a22f12ba,},Annotations:map[string]string{io.kubernetes.container.hash: 64aa1422,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fd2dbf80d04b99672d475d13595fc0af6b058ef669561db78a22ceb235839f8,PodSandboxId:3a44777f5a58f71965c80cd1daef31f89b8d60917507f14840d3a8030aa103fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721363716804356643,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa73bd154bae08cde433b
82e51ec78df,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2acd3b5d4b137fa864e9c0ca3e381b555ea7b28350ff23870b7291cd3f9ac68d,PodSandboxId:63f316747da1dc5053319feffa25889a8f469ebabe3fca1227fa8a4a377b6dd0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721363716792205557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 349099d3ab7836a8
3b145a30eb9936d6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31e39f44635dba72799e46f73a50c13c6ba21bf7a3b7dc6391fc917ef06f12f3,PodSandboxId:1fa6c472995417d2338ecc16a10635df2b9e4c896f1c46924054ff1a34148ab3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721363716643436478,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c423aaede6d00f00e13551d35c79c4b,},Ann
otations:map[string]string{io.kubernetes.container.hash: 9eff95bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:376dac90130c20ad5ee1fd7cda6913750ce2847ab6b24b8a5ade8f85a7933736,PodSandboxId:0d44fb43a7c0f7260d182a488fbebec1e6a62c08f3bbfbe0601b399af7548cc1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721363166324688671,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xjdg9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e5d1049-6c89-429b-96a8-cbb8abd2b26f,},Annot
ations:map[string]string{io.kubernetes.container.hash: 3ac1d6ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8fbd19dd4d996dc34ade93b87b023c88d559af11bbea8aaf9f9b3a2e6f05672,PodSandboxId:0bb04d64362d6e31033c86d2709d8dea8839e5561f013e6cb8daeb9084a3c238,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721363015206021542,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hwdsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894f9528-78da-4cae-9ec6-8e82a3e73264,},Annotations:map[string]string{io.kube
rnetes.container.hash: 7fba949d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f21e70e6b65805b44f3ff4e90dd773f402ad0eb25822b81eda2c2816bc7691,PodSandboxId:62bcd5e2d22cb8954eda67d5b70c31e06ef2499d3b74d790b9661ca694f80657,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721363015144713551,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wzcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a434f69a-903d-4961-a54c-9a85cbc694b1,},Annotations:map[string]string{io.kubernetes.container.hash: 40e24975,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1109d10f2b3d47b16b6903644a11f08183882b4c704e06aa23088c57b2fa5036,PodSandboxId:b3c277ef1f53ba98a24302115099ce4ef05d3e256d942b1f4d3157995b54ecae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721363003130730891,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fsr5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 988e1118-927a-4468-ba25-3a78d8d06919,},Annotations:map[string]string{io.kubernetes.container.hash: 7fbb852b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c9e12889a1662da7b7e18d67865a94825c080659d17bd601943c475e944e3b6,PodSandboxId:696364d98fd5c2d4bd655bdb3ca5e141f8f51b0ca3c66da051ac248dc390a4d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa59
2b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721363002828851546,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8dbqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd11aac3-62df-4603-8102-3384bcc100f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3deffe05,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeef22350ca0f1246fb4f32ba2c27abda6e12fc362e76159e9806d53d4296a23,PodSandboxId:fa3836c68c71d461c5bb30a2c7d5752ee698b31422b3aa8d734b871de431a0e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0ca
e4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721362982969032426,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa73bd154bae08cde433b82e51ec78df,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b041f48cc90cfab5ce219992de0d8ffb58d778e852683bd80044d359a0b7d010,PodSandboxId:a03be60cf1fe9442e05ead4b2fd503182a4cb9cd50acaaedfd7ef1a4024aab8c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1721362982931161392,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cca920f3f48d0fa2da37f2a22f12ba,},Annotations:map[string]string{io.kubernetes.container.hash: 64aa1422,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1d129505-c11b-4c9f-8599-f3671c7260a6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:40:17 ha-925161 crio[3828]: time="2024-07-19 04:40:17.057407818Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ea52de19-7f7f-4d5b-92eb-3c1111dbc5ee name=/runtime.v1.RuntimeService/Version
	Jul 19 04:40:17 ha-925161 crio[3828]: time="2024-07-19 04:40:17.057496207Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ea52de19-7f7f-4d5b-92eb-3c1111dbc5ee name=/runtime.v1.RuntimeService/Version
	Jul 19 04:40:17 ha-925161 crio[3828]: time="2024-07-19 04:40:17.058516116Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9c635ef9-6a6c-498a-94ed-f5222e38a1d1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 04:40:17 ha-925161 crio[3828]: time="2024-07-19 04:40:17.059062215Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721364017059034707,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144984,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9c635ef9-6a6c-498a-94ed-f5222e38a1d1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 04:40:17 ha-925161 crio[3828]: time="2024-07-19 04:40:17.059541290Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=057605e3-62ee-4a35-9bbb-66a2efa39dee name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:40:17 ha-925161 crio[3828]: time="2024-07-19 04:40:17.059601790Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=057605e3-62ee-4a35-9bbb-66a2efa39dee name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:40:17 ha-925161 crio[3828]: time="2024-07-19 04:40:17.060024000Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de38d7f8ad913451255e6229dc934869431b48c5f872bcecb0f3e1a403da4cb4,PodSandboxId:487e1cddacb84e748c6dfde9fd66ec573f5c3b4c3bc99fa2e21511e78ce3652b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721363775087647557,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf27da3d-f736-4742-9af5-2c0a024075ec,},Annotations:map[string]string{io.kubernetes.container.hash: f97e67c5,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f26b178fe8fc44f1c17d3c9396d1db5bf694da9604aee967ff718d1294de0e4d,PodSandboxId:63f316747da1dc5053319feffa25889a8f469ebabe3fca1227fa8a4a377b6dd0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721363758094491359,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 349099d3ab7836a83b145a30eb9936d6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e01d6998bdb35fcf68bf94a93f0f52290926f382244cf5c91f43ccb8653b233c,PodSandboxId:1fa6c472995417d2338ecc16a10635df2b9e4c896f1c46924054ff1a34148ab3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721363757089338504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c423aaede6d00f00e13551d35c79c4b,},Annotations:map[string]string{io.kubernetes.container.hash: 9eff95bf,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f26b00712a6d5c4d0880df5fe980d974c4610752b924c5d0dfb834e87567fca9,PodSandboxId:6ca18b08ad5cff45f7e0e989e6f170ffc8941bedaf873f70a71407c84aa34f2a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721363750196671162,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xjdg9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e5d1049-6c89-429b-96a8-cbb8abd2b26f,},Annotations:map[string]string{io.kubernetes.container.hash: 3ac1d6ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fee45f18b02073c7552269415ff4c082be8f7549456304a60fa420eaf656d817,PodSandboxId:1a9981cea564c7986a1621609a2660923a7d1c12bf1212ce32e5c9e49a7b682d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721363733157531296,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0da61aa9c7d9fb5aa54fb9d86519c66d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:458930eb4d22263ff4b3c2565edc5f57985aadb6c9bccfa7be738ef94f1f5a3d,PodSandboxId:487e1cddacb84e748c6dfde9fd66ec573f5c3b4c3bc99fa2e21511e78ce3652b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721363717079792588,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf27da3d-f736-4742-9af5-2c0a024075ec,},Annotations:map[string]string{io.kubernetes.container.hash: f97e67c5,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66526fd5cc961dcad93f9334ace4639ff28d46e16c57e6a7665a73c0106842bc,PodSandboxId:9a7e15608cb13a54b49490ee57950e0bf26fe26abc77f21ac40699335603a3cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721363717094993295,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8dbqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd11aac3-62df-4603-8102-3384bcc100f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3deffe05,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:76385fe3aa9b3b1e67ba577f6669ee7c0a1a6cd4a3652f4043910a7d5e44af35,PodSandboxId:e3f389f30197bdfddffd259c3e20564e84b4c8d360a171e7fc586e409583883a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721363716928353935,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fsr5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 988e1118-927a-4468-ba25-3a78d8d06919,},Annotations:map[string]string{io.kubernetes.container.hash: 7fbb852b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1961d5e5
da1a0a8c475b7c70eaf56087054dbc2a459fc00fd013b69ecb5d5b31,PodSandboxId:b0fca0835d179f0ac31e1ae710482ca32ee75304f4f77e608e8d6c1b15002676,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721363716874890524,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hwdsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894f9528-78da-4cae-9ec6-8e82a3e73264,},Annotations:map[string]string{io.kubernetes.container.hash: 7fba949d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4093158c49f1e636104b6473da67cd759726ffb37667deb0fdb30953bfff3ce0,PodSandboxId:a066744bcf4f49e5350c8b2feb87f41aa9fca5658ad8ba7b17fbff019ff6fe06,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721363716985640108,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wzcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a434f69a-903d-4961-a54c-9a85cbc694b1,},Annotations:map[string]string{io.kubernetes.container.hash: 40e24975,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38d19d3723a70de48633fa0df9c13a3b49ac927058ddbec544d2e1756ed2128b,PodSandboxId:1c0dbac79d5413baceab0f90d5d10b4817530c9b1715b96109ef52acda220867,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721363716813072324,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
36cca920f3f48d0fa2da37f2a22f12ba,},Annotations:map[string]string{io.kubernetes.container.hash: 64aa1422,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fd2dbf80d04b99672d475d13595fc0af6b058ef669561db78a22ceb235839f8,PodSandboxId:3a44777f5a58f71965c80cd1daef31f89b8d60917507f14840d3a8030aa103fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721363716804356643,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa73bd154bae08cde433b
82e51ec78df,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2acd3b5d4b137fa864e9c0ca3e381b555ea7b28350ff23870b7291cd3f9ac68d,PodSandboxId:63f316747da1dc5053319feffa25889a8f469ebabe3fca1227fa8a4a377b6dd0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721363716792205557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 349099d3ab7836a8
3b145a30eb9936d6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31e39f44635dba72799e46f73a50c13c6ba21bf7a3b7dc6391fc917ef06f12f3,PodSandboxId:1fa6c472995417d2338ecc16a10635df2b9e4c896f1c46924054ff1a34148ab3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721363716643436478,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c423aaede6d00f00e13551d35c79c4b,},Ann
otations:map[string]string{io.kubernetes.container.hash: 9eff95bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:376dac90130c20ad5ee1fd7cda6913750ce2847ab6b24b8a5ade8f85a7933736,PodSandboxId:0d44fb43a7c0f7260d182a488fbebec1e6a62c08f3bbfbe0601b399af7548cc1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721363166324688671,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xjdg9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e5d1049-6c89-429b-96a8-cbb8abd2b26f,},Annot
ations:map[string]string{io.kubernetes.container.hash: 3ac1d6ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8fbd19dd4d996dc34ade93b87b023c88d559af11bbea8aaf9f9b3a2e6f05672,PodSandboxId:0bb04d64362d6e31033c86d2709d8dea8839e5561f013e6cb8daeb9084a3c238,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721363015206021542,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hwdsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894f9528-78da-4cae-9ec6-8e82a3e73264,},Annotations:map[string]string{io.kube
rnetes.container.hash: 7fba949d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f21e70e6b65805b44f3ff4e90dd773f402ad0eb25822b81eda2c2816bc7691,PodSandboxId:62bcd5e2d22cb8954eda67d5b70c31e06ef2499d3b74d790b9661ca694f80657,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721363015144713551,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wzcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a434f69a-903d-4961-a54c-9a85cbc694b1,},Annotations:map[string]string{io.kubernetes.container.hash: 40e24975,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1109d10f2b3d47b16b6903644a11f08183882b4c704e06aa23088c57b2fa5036,PodSandboxId:b3c277ef1f53ba98a24302115099ce4ef05d3e256d942b1f4d3157995b54ecae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721363003130730891,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fsr5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 988e1118-927a-4468-ba25-3a78d8d06919,},Annotations:map[string]string{io.kubernetes.container.hash: 7fbb852b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c9e12889a1662da7b7e18d67865a94825c080659d17bd601943c475e944e3b6,PodSandboxId:696364d98fd5c2d4bd655bdb3ca5e141f8f51b0ca3c66da051ac248dc390a4d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa59
2b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721363002828851546,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8dbqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd11aac3-62df-4603-8102-3384bcc100f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3deffe05,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeef22350ca0f1246fb4f32ba2c27abda6e12fc362e76159e9806d53d4296a23,PodSandboxId:fa3836c68c71d461c5bb30a2c7d5752ee698b31422b3aa8d734b871de431a0e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0ca
e4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721362982969032426,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa73bd154bae08cde433b82e51ec78df,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b041f48cc90cfab5ce219992de0d8ffb58d778e852683bd80044d359a0b7d010,PodSandboxId:a03be60cf1fe9442e05ead4b2fd503182a4cb9cd50acaaedfd7ef1a4024aab8c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1721362982931161392,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cca920f3f48d0fa2da37f2a22f12ba,},Annotations:map[string]string{io.kubernetes.container.hash: 64aa1422,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=057605e3-62ee-4a35-9bbb-66a2efa39dee name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:40:17 ha-925161 crio[3828]: time="2024-07-19 04:40:17.079587156Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7da6b0a6-afb9-42bd-876f-365445dc5b1d name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 19 04:40:17 ha-925161 crio[3828]: time="2024-07-19 04:40:17.080212470Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:6ca18b08ad5cff45f7e0e989e6f170ffc8941bedaf873f70a71407c84aa34f2a,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-xjdg9,Uid:5e5d1049-6c89-429b-96a8-cbb8abd2b26f,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721363750058687215,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-xjdg9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e5d1049-6c89-429b-96a8-cbb8abd2b26f,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T04:26:03.130636712Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1a9981cea564c7986a1621609a2660923a7d1c12bf1212ce32e5c9e49a7b682d,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-925161,Uid:0da61aa9c7d9fb5aa54fb9d86519c66d,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1721363733061055041,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0da61aa9c7d9fb5aa54fb9d86519c66d,},Annotations:map[string]string{kubernetes.io/config.hash: 0da61aa9c7d9fb5aa54fb9d86519c66d,kubernetes.io/config.seen: 2024-07-19T04:35:09.728684329Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a066744bcf4f49e5350c8b2feb87f41aa9fca5658ad8ba7b17fbff019ff6fe06,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-7wzcg,Uid:a434f69a-903d-4961-a54c-9a85cbc694b1,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1721363716391568083,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wzcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a434f69a-903d-4961-a54c-9a85cbc694b1,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07
-19T04:23:34.608406566Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b0fca0835d179f0ac31e1ae710482ca32ee75304f4f77e608e8d6c1b15002676,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-hwdsq,Uid:894f9528-78da-4cae-9ec6-8e82a3e73264,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721363716367771251,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-hwdsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894f9528-78da-4cae-9ec6-8e82a3e73264,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T04:23:34.612478545Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9a7e15608cb13a54b49490ee57950e0bf26fe26abc77f21ac40699335603a3cc,Metadata:&PodSandboxMetadata{Name:kube-proxy-8dbqt,Uid:cd11aac3-62df-4603-8102-3384bcc100f1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721363716350368065,Labels:map[string]string{co
ntroller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-8dbqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd11aac3-62df-4603-8102-3384bcc100f1,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T04:23:21.804029250Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:63f316747da1dc5053319feffa25889a8f469ebabe3fca1227fa8a4a377b6dd0,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-925161,Uid:349099d3ab7836a83b145a30eb9936d6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721363716343064385,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 349099d3ab7836a83b145a30eb9936d6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 349099d3ab7836a83b145a30eb
9936d6,kubernetes.io/config.seen: 2024-07-19T04:23:09.067264237Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1fa6c472995417d2338ecc16a10635df2b9e4c896f1c46924054ff1a34148ab3,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-925161,Uid:7c423aaede6d00f00e13551d35c79c4b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721363716337038507,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c423aaede6d00f00e13551d35c79c4b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.246:8443,kubernetes.io/config.hash: 7c423aaede6d00f00e13551d35c79c4b,kubernetes.io/config.seen: 2024-07-19T04:23:09.067260661Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3a44777f5a58f71965c80cd1daef31f89b8d60917507f14840d3a8030aa103fa,Metadata:&PodSandboxMetad
ata{Name:kube-scheduler-ha-925161,Uid:aa73bd154bae08cde433b82e51ec78df,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721363716334070714,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa73bd154bae08cde433b82e51ec78df,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: aa73bd154bae08cde433b82e51ec78df,kubernetes.io/config.seen: 2024-07-19T04:23:09.067265312Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1c0dbac79d5413baceab0f90d5d10b4817530c9b1715b96109ef52acda220867,Metadata:&PodSandboxMetadata{Name:etcd-ha-925161,Uid:36cca920f3f48d0fa2da37f2a22f12ba,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721363716327075089,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 36cca920f3f48d0fa2da37f2a22f12ba,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.246:2379,kubernetes.io/config.hash: 36cca920f3f48d0fa2da37f2a22f12ba,kubernetes.io/config.seen: 2024-07-19T04:23:09.067266992Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e3f389f30197bdfddffd259c3e20564e84b4c8d360a171e7fc586e409583883a,Metadata:&PodSandboxMetadata{Name:kindnet-fsr5f,Uid:988e1118-927a-4468-ba25-3a78d8d06919,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721363716310203489,Labels:map[string]string{app: kindnet,controller-revision-hash: 545f566499,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-fsr5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 988e1118-927a-4468-ba25-3a78d8d06919,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T04:23:21.836179319Z,kubernetes.io/config.source:
api,},RuntimeHandler:,},&PodSandbox{Id:487e1cddacb84e748c6dfde9fd66ec573f5c3b4c3bc99fa2e21511e78ce3652b,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:bf27da3d-f736-4742-9af5-2c0a024075ec,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721363716308586428,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf27da3d-f736-4742-9af5-2c0a024075ec,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imag
ePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-19T04:23:34.615001564Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=7da6b0a6-afb9-42bd-876f-365445dc5b1d name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 19 04:40:17 ha-925161 crio[3828]: time="2024-07-19 04:40:17.081164644Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cecb875b-c56e-4213-b451-e2f6539b93e9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:40:17 ha-925161 crio[3828]: time="2024-07-19 04:40:17.081279477Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cecb875b-c56e-4213-b451-e2f6539b93e9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:40:17 ha-925161 crio[3828]: time="2024-07-19 04:40:17.081516706Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de38d7f8ad913451255e6229dc934869431b48c5f872bcecb0f3e1a403da4cb4,PodSandboxId:487e1cddacb84e748c6dfde9fd66ec573f5c3b4c3bc99fa2e21511e78ce3652b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721363775087647557,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf27da3d-f736-4742-9af5-2c0a024075ec,},Annotations:map[string]string{io.kubernetes.container.hash: f97e67c5,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f26b178fe8fc44f1c17d3c9396d1db5bf694da9604aee967ff718d1294de0e4d,PodSandboxId:63f316747da1dc5053319feffa25889a8f469ebabe3fca1227fa8a4a377b6dd0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721363758094491359,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 349099d3ab7836a83b145a30eb9936d6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e01d6998bdb35fcf68bf94a93f0f52290926f382244cf5c91f43ccb8653b233c,PodSandboxId:1fa6c472995417d2338ecc16a10635df2b9e4c896f1c46924054ff1a34148ab3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721363757089338504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c423aaede6d00f00e13551d35c79c4b,},Annotations:map[string]string{io.kubernetes.container.hash: 9eff95bf,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f26b00712a6d5c4d0880df5fe980d974c4610752b924c5d0dfb834e87567fca9,PodSandboxId:6ca18b08ad5cff45f7e0e989e6f170ffc8941bedaf873f70a71407c84aa34f2a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721363750196671162,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xjdg9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e5d1049-6c89-429b-96a8-cbb8abd2b26f,},Annotations:map[string]string{io.kubernetes.container.hash: 3ac1d6ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fee45f18b02073c7552269415ff4c082be8f7549456304a60fa420eaf656d817,PodSandboxId:1a9981cea564c7986a1621609a2660923a7d1c12bf1212ce32e5c9e49a7b682d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721363733157531296,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0da61aa9c7d9fb5aa54fb9d86519c66d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66526fd5cc961dcad93f9334ace4639ff28d46e16c57e6a7665a73c0106842bc,PodSandboxId:9a7e15608cb13a54b49490ee57950e0bf26fe26abc77f21ac40699335603a3cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721363717094993295,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8dbqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd11aac3-62df-4603-8102-3384bcc100f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3deffe05,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:76385fe3aa9b3b1e67ba577f6669ee7c0a1a6cd4a3652f4043910a7d5e44af35,PodSandboxId:e3f389f30197bdfddffd259c3e20564e84b4c8d360a171e7fc586e409583883a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721363716928353935,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fsr5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 988e1118-927a-4468-ba25-3a78d8d06919,},Annotations:map[string]string{io.kubernetes.container.hash: 7fbb852b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:1961d5e5da1a0a8c475b7c70eaf56087054dbc2a459fc00fd013b69ecb5d5b31,PodSandboxId:b0fca0835d179f0ac31e1ae710482ca32ee75304f4f77e608e8d6c1b15002676,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721363716874890524,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hwdsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894f9528-78da-4cae-9ec6-8e82a3e73264,},Annotations:map[string]string{io.kubernetes.container.hash: 7fba949d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4093158c49f1e636104b6473da67cd759726ffb37667deb0fdb30953bfff3ce0,PodSandboxId:a066744bcf4f49e5350c8b2feb87f41aa9fca5658ad8ba7b17fbff019ff6fe06,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721363716985640108,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wzcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a434f69a-903d-4961-a54c-9a85cbc694b1,},Annotations:map[string]string{io.kubernetes.container.hash: 40e24975,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38d19d3723a70de48633fa0df9c13a3b49ac927058ddbec544d2e1756ed2128b,PodSandboxId:1c0dbac79d5413baceab0f90d5d10b4817530c9b1715b96109ef52acda220867,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721363716813072324,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 36cca920f3f48d0fa2da37f2a22f12ba,},Annotations:map[string]string{io.kubernetes.container.hash: 64aa1422,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fd2dbf80d04b99672d475d13595fc0af6b058ef669561db78a22ceb235839f8,PodSandboxId:3a44777f5a58f71965c80cd1daef31f89b8d60917507f14840d3a8030aa103fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721363716804356643,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa73bd15
4bae08cde433b82e51ec78df,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cecb875b-c56e-4213-b451-e2f6539b93e9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:40:17 ha-925161 crio[3828]: time="2024-07-19 04:40:17.102679839Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1f3d22df-3750-45fc-9d3f-164579f6b3cb name=/runtime.v1.RuntimeService/Version
	Jul 19 04:40:17 ha-925161 crio[3828]: time="2024-07-19 04:40:17.102761727Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1f3d22df-3750-45fc-9d3f-164579f6b3cb name=/runtime.v1.RuntimeService/Version
	Jul 19 04:40:17 ha-925161 crio[3828]: time="2024-07-19 04:40:17.104126926Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9930dd5a-35ba-4348-8173-88761c98fa22 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 04:40:17 ha-925161 crio[3828]: time="2024-07-19 04:40:17.104653636Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721364017104630283,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:144984,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9930dd5a-35ba-4348-8173-88761c98fa22 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 04:40:17 ha-925161 crio[3828]: time="2024-07-19 04:40:17.105208621Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=999f3e92-e549-49f4-89cb-dd45a33f5eab name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:40:17 ha-925161 crio[3828]: time="2024-07-19 04:40:17.105277030Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=999f3e92-e549-49f4-89cb-dd45a33f5eab name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:40:17 ha-925161 crio[3828]: time="2024-07-19 04:40:17.106151971Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de38d7f8ad913451255e6229dc934869431b48c5f872bcecb0f3e1a403da4cb4,PodSandboxId:487e1cddacb84e748c6dfde9fd66ec573f5c3b4c3bc99fa2e21511e78ce3652b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721363775087647557,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf27da3d-f736-4742-9af5-2c0a024075ec,},Annotations:map[string]string{io.kubernetes.container.hash: f97e67c5,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f26b178fe8fc44f1c17d3c9396d1db5bf694da9604aee967ff718d1294de0e4d,PodSandboxId:63f316747da1dc5053319feffa25889a8f469ebabe3fca1227fa8a4a377b6dd0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721363758094491359,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 349099d3ab7836a83b145a30eb9936d6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e01d6998bdb35fcf68bf94a93f0f52290926f382244cf5c91f43ccb8653b233c,PodSandboxId:1fa6c472995417d2338ecc16a10635df2b9e4c896f1c46924054ff1a34148ab3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721363757089338504,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c423aaede6d00f00e13551d35c79c4b,},Annotations:map[string]string{io.kubernetes.container.hash: 9eff95bf,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f26b00712a6d5c4d0880df5fe980d974c4610752b924c5d0dfb834e87567fca9,PodSandboxId:6ca18b08ad5cff45f7e0e989e6f170ffc8941bedaf873f70a71407c84aa34f2a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721363750196671162,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xjdg9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e5d1049-6c89-429b-96a8-cbb8abd2b26f,},Annotations:map[string]string{io.kubernetes.container.hash: 3ac1d6ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fee45f18b02073c7552269415ff4c082be8f7549456304a60fa420eaf656d817,PodSandboxId:1a9981cea564c7986a1621609a2660923a7d1c12bf1212ce32e5c9e49a7b682d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1721363733157531296,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0da61aa9c7d9fb5aa54fb9d86519c66d,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:458930eb4d22263ff4b3c2565edc5f57985aadb6c9bccfa7be738ef94f1f5a3d,PodSandboxId:487e1cddacb84e748c6dfde9fd66ec573f5c3b4c3bc99fa2e21511e78ce3652b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721363717079792588,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf27da3d-f736-4742-9af5-2c0a024075ec,},Annotations:map[string]string{io.kubernetes.container.hash: f97e67c5,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66526fd5cc961dcad93f9334ace4639ff28d46e16c57e6a7665a73c0106842bc,PodSandboxId:9a7e15608cb13a54b49490ee57950e0bf26fe26abc77f21ac40699335603a3cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721363717094993295,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8dbqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd11aac3-62df-4603-8102-3384bcc100f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3deffe05,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:76385fe3aa9b3b1e67ba577f6669ee7c0a1a6cd4a3652f4043910a7d5e44af35,PodSandboxId:e3f389f30197bdfddffd259c3e20564e84b4c8d360a171e7fc586e409583883a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721363716928353935,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fsr5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 988e1118-927a-4468-ba25-3a78d8d06919,},Annotations:map[string]string{io.kubernetes.container.hash: 7fbb852b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1961d5e5
da1a0a8c475b7c70eaf56087054dbc2a459fc00fd013b69ecb5d5b31,PodSandboxId:b0fca0835d179f0ac31e1ae710482ca32ee75304f4f77e608e8d6c1b15002676,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721363716874890524,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hwdsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894f9528-78da-4cae-9ec6-8e82a3e73264,},Annotations:map[string]string{io.kubernetes.container.hash: 7fba949d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4093158c49f1e636104b6473da67cd759726ffb37667deb0fdb30953bfff3ce0,PodSandboxId:a066744bcf4f49e5350c8b2feb87f41aa9fca5658ad8ba7b17fbff019ff6fe06,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721363716985640108,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wzcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a434f69a-903d-4961-a54c-9a85cbc694b1,},Annotations:map[string]string{io.kubernetes.container.hash: 40e24975,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38d19d3723a70de48633fa0df9c13a3b49ac927058ddbec544d2e1756ed2128b,PodSandboxId:1c0dbac79d5413baceab0f90d5d10b4817530c9b1715b96109ef52acda220867,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721363716813072324,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
36cca920f3f48d0fa2da37f2a22f12ba,},Annotations:map[string]string{io.kubernetes.container.hash: 64aa1422,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fd2dbf80d04b99672d475d13595fc0af6b058ef669561db78a22ceb235839f8,PodSandboxId:3a44777f5a58f71965c80cd1daef31f89b8d60917507f14840d3a8030aa103fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721363716804356643,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa73bd154bae08cde433b
82e51ec78df,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2acd3b5d4b137fa864e9c0ca3e381b555ea7b28350ff23870b7291cd3f9ac68d,PodSandboxId:63f316747da1dc5053319feffa25889a8f469ebabe3fca1227fa8a4a377b6dd0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721363716792205557,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 349099d3ab7836a8
3b145a30eb9936d6,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31e39f44635dba72799e46f73a50c13c6ba21bf7a3b7dc6391fc917ef06f12f3,PodSandboxId:1fa6c472995417d2338ecc16a10635df2b9e4c896f1c46924054ff1a34148ab3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721363716643436478,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c423aaede6d00f00e13551d35c79c4b,},Ann
otations:map[string]string{io.kubernetes.container.hash: 9eff95bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:376dac90130c20ad5ee1fd7cda6913750ce2847ab6b24b8a5ade8f85a7933736,PodSandboxId:0d44fb43a7c0f7260d182a488fbebec1e6a62c08f3bbfbe0601b399af7548cc1,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721363166324688671,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xjdg9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5e5d1049-6c89-429b-96a8-cbb8abd2b26f,},Annot
ations:map[string]string{io.kubernetes.container.hash: 3ac1d6ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8fbd19dd4d996dc34ade93b87b023c88d559af11bbea8aaf9f9b3a2e6f05672,PodSandboxId:0bb04d64362d6e31033c86d2709d8dea8839e5561f013e6cb8daeb9084a3c238,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721363015206021542,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-hwdsq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 894f9528-78da-4cae-9ec6-8e82a3e73264,},Annotations:map[string]string{io.kube
rnetes.container.hash: 7fba949d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f21e70e6b65805b44f3ff4e90dd773f402ad0eb25822b81eda2c2816bc7691,PodSandboxId:62bcd5e2d22cb8954eda67d5b70c31e06ef2499d3b74d790b9661ca694f80657,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721363015144713551,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-7wzcg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a434f69a-903d-4961-a54c-9a85cbc694b1,},Annotations:map[string]string{io.kubernetes.container.hash: 40e24975,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1109d10f2b3d47b16b6903644a11f08183882b4c704e06aa23088c57b2fa5036,PodSandboxId:b3c277ef1f53ba98a24302115099ce4ef05d3e256d942b1f4d3157995b54ecae,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721363003130730891,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fsr5f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 988e1118-927a-4468-ba25-3a78d8d06919,},Annotations:map[string]string{io.kubernetes.container.hash: 7fbb852b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c9e12889a1662da7b7e18d67865a94825c080659d17bd601943c475e944e3b6,PodSandboxId:696364d98fd5c2d4bd655bdb3ca5e141f8f51b0ca3c66da051ac248dc390a4d8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa59
2b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721363002828851546,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8dbqt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd11aac3-62df-4603-8102-3384bcc100f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3deffe05,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eeef22350ca0f1246fb4f32ba2c27abda6e12fc362e76159e9806d53d4296a23,PodSandboxId:fa3836c68c71d461c5bb30a2c7d5752ee698b31422b3aa8d734b871de431a0e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0ca
e4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721362982969032426,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa73bd154bae08cde433b82e51ec78df,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b041f48cc90cfab5ce219992de0d8ffb58d778e852683bd80044d359a0b7d010,PodSandboxId:a03be60cf1fe9442e05ead4b2fd503182a4cb9cd50acaaedfd7ef1a4024aab8c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_EXITED,CreatedAt:1721362982931161392,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-925161,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36cca920f3f48d0fa2da37f2a22f12ba,},Annotations:map[string]string{io.kubernetes.container.hash: 64aa1422,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=999f3e92-e549-49f4-89cb-dd45a33f5eab name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	de38d7f8ad913       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   487e1cddacb84       storage-provisioner
	f26b178fe8fc4       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   2                   63f316747da1d       kube-controller-manager-ha-925161
	e01d6998bdb35       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            3                   1fa6c47299541       kube-apiserver-ha-925161
	f26b00712a6d5       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   6ca18b08ad5cf       busybox-fc5497c4f-xjdg9
	fee45f18b0207       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   1a9981cea564c       kube-vip-ha-925161
	66526fd5cc961       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      5 minutes ago       Running             kube-proxy                1                   9a7e15608cb13       kube-proxy-8dbqt
	458930eb4d222       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       3                   487e1cddacb84       storage-provisioner
	4093158c49f1e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   a066744bcf4f4       coredns-7db6d8ff4d-7wzcg
	76385fe3aa9b3       5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f                                      5 minutes ago       Running             kindnet-cni               1                   e3f389f30197b       kindnet-fsr5f
	1961d5e5da1a0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   b0fca0835d179       coredns-7db6d8ff4d-hwdsq
	38d19d3723a70       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   1c0dbac79d541       etcd-ha-925161
	3fd2dbf80d04b       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      5 minutes ago       Running             kube-scheduler            1                   3a44777f5a58f       kube-scheduler-ha-925161
	2acd3b5d4b137       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      5 minutes ago       Exited              kube-controller-manager   1                   63f316747da1d       kube-controller-manager-ha-925161
	31e39f44635db       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      5 minutes ago       Exited              kube-apiserver            2                   1fa6c47299541       kube-apiserver-ha-925161
	376dac90130c2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   14 minutes ago      Exited              busybox                   0                   0d44fb43a7c0f       busybox-fc5497c4f-xjdg9
	f8fbd19dd4d99       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   0bb04d64362d6       coredns-7db6d8ff4d-hwdsq
	14f21e70e6b65       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   62bcd5e2d22cb       coredns-7db6d8ff4d-7wzcg
	1109d10f2b3d4       5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f                                      16 minutes ago      Exited              kindnet-cni               0                   b3c277ef1f53b       kindnet-fsr5f
	6c9e12889a166       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      16 minutes ago      Exited              kube-proxy                0                   696364d98fd5c       kube-proxy-8dbqt
	eeef22350ca0f       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      17 minutes ago      Exited              kube-scheduler            0                   fa3836c68c71d       kube-scheduler-ha-925161
	b041f48cc90cf       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      17 minutes ago      Exited              etcd                      0                   a03be60cf1fe9       etcd-ha-925161
	
	
	==> coredns [14f21e70e6b65805b44f3ff4e90dd773f402ad0eb25822b81eda2c2816bc7691] <==
	[INFO] 10.244.1.2:41971 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00346851s
	[INFO] 10.244.1.2:57720 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114773s
	[INFO] 10.244.2.3:58305 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001754058s
	[INFO] 10.244.2.3:54206 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000118435s
	[INFO] 10.244.2.3:37056 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000234861s
	[INFO] 10.244.2.3:45425 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073142s
	[INFO] 10.244.0.4:54647 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00007602s
	[INFO] 10.244.0.4:33742 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001338144s
	[INFO] 10.244.1.2:58214 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123014s
	[INFO] 10.244.1.2:58591 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083326s
	[INFO] 10.244.1.2:33227 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000196172s
	[INFO] 10.244.2.3:49582 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115766s
	[INFO] 10.244.2.3:46761 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109526s
	[INFO] 10.244.0.4:50248 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066399s
	[INFO] 10.244.1.2:45766 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00012847s
	[INFO] 10.244.1.2:57759 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000145394s
	[INFO] 10.244.2.3:50037 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160043s
	[INFO] 10.244.2.3:49469 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000075305s
	[INFO] 10.244.2.3:39504 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000057986s
	[INFO] 10.244.0.4:39098 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096095s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [1961d5e5da1a0a8c475b7c70eaf56087054dbc2a459fc00fd013b69ecb5d5b31] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:58630->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1233165234]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Jul-2024 04:35:28.338) (total time: 10550ms):
	Trace[1233165234]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:58630->10.96.0.1:443: read: connection reset by peer 10550ms (04:35:38.888)
	Trace[1233165234]: [10.550381464s] [10.550381464s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:58630->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:47410->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:47410->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:58670->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:58670->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [4093158c49f1e636104b6473da67cd759726ffb37667deb0fdb30953bfff3ce0] <==
	Trace[245165649]: [10.001410163s] [10.001410163s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [f8fbd19dd4d996dc34ade93b87b023c88d559af11bbea8aaf9f9b3a2e6f05672] <==
	[INFO] 10.244.2.3:48698 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001253504s
	[INFO] 10.244.2.3:45424 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060715s
	[INFO] 10.244.0.4:53435 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00016485s
	[INFO] 10.244.0.4:47050 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001790838s
	[INFO] 10.244.0.4:38074 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000058109s
	[INFO] 10.244.0.4:53487 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000066861s
	[INFO] 10.244.0.4:48230 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00012907s
	[INFO] 10.244.0.4:45713 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000053151s
	[INFO] 10.244.1.2:40224 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000119446s
	[INFO] 10.244.2.3:48643 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000101063s
	[INFO] 10.244.2.3:59393 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008526s
	[INFO] 10.244.0.4:38457 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000103892s
	[INFO] 10.244.0.4:36242 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00015645s
	[INFO] 10.244.0.4:47871 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076477s
	[INFO] 10.244.1.2:44263 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176905s
	[INFO] 10.244.1.2:56297 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000215661s
	[INFO] 10.244.2.3:45341 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000148843s
	[INFO] 10.244.0.4:41990 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000105346s
	[INFO] 10.244.0.4:43204 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000121535s
	[INFO] 10.244.0.4:60972 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000251518s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-925161
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-925161
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=ha-925161
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T04_23_10_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 04:23:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-925161
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 04:40:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 04:36:20 +0000   Fri, 19 Jul 2024 04:23:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 04:36:20 +0000   Fri, 19 Jul 2024 04:23:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 04:36:20 +0000   Fri, 19 Jul 2024 04:23:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 04:36:20 +0000   Fri, 19 Jul 2024 04:23:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.246
	  Hostname:    ha-925161
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ff8c87164fa44c4f827d29ad58165cee
	  System UUID:                ff8c8716-4fa4-4c4f-827d-29ad58165cee
	  Boot ID:                    82d231ce-d7a6-41a1-a656-2e7410a6f84c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xjdg9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-7db6d8ff4d-7wzcg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-hwdsq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-925161                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-fsr5f                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-925161             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-925161    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-8dbqt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-925161             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-925161                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 16m                  kube-proxy       
	  Normal   Starting                 4m16s                kube-proxy       
	  Normal   NodeHasNoDiskPressure    17m                  kubelet          Node ha-925161 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 17m                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  17m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  17m                  kubelet          Node ha-925161 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     17m                  kubelet          Node ha-925161 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m                  node-controller  Node ha-925161 event: Registered Node ha-925161 in Controller
	  Normal   NodeReady                16m                  kubelet          Node ha-925161 status is now: NodeReady
	  Normal   RegisteredNode           15m                  node-controller  Node ha-925161 event: Registered Node ha-925161 in Controller
	  Normal   RegisteredNode           14m                  node-controller  Node ha-925161 event: Registered Node ha-925161 in Controller
	  Warning  ContainerGCFailed        5m8s (x2 over 6m8s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m13s                node-controller  Node ha-925161 event: Registered Node ha-925161 in Controller
	  Normal   RegisteredNode           4m6s                 node-controller  Node ha-925161 event: Registered Node ha-925161 in Controller
	  Normal   RegisteredNode           3m9s                 node-controller  Node ha-925161 event: Registered Node ha-925161 in Controller
	
	
	Name:               ha-925161-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-925161-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=ha-925161
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T04_24_23_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 04:24:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-925161-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 04:40:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 04:38:54 +0000   Fri, 19 Jul 2024 04:38:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 04:38:54 +0000   Fri, 19 Jul 2024 04:38:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 04:38:54 +0000   Fri, 19 Jul 2024 04:38:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 04:38:54 +0000   Fri, 19 Jul 2024 04:38:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-925161-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9158ff8415464fc08c01f2344e6694f7
	  System UUID:                9158ff84-1546-4fc0-8c01-f2344e6694f7
	  Boot ID:                    f097e6d1-5160-4643-ae17-6e026c47bbf2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5785p                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-925161-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-dkctc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-925161-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-925161-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-s6df4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-925161-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-925161-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  Starting                 3m55s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-925161-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-925161-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-925161-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           15m                    node-controller  Node ha-925161-m02 event: Registered Node ha-925161-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-925161-m02 event: Registered Node ha-925161-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-925161-m02 event: Registered Node ha-925161-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-925161-m02 status is now: NodeNotReady
	  Normal  NodeHasNoDiskPressure    4m47s (x8 over 4m47s)  kubelet          Node ha-925161-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 4m47s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m47s (x8 over 4m47s)  kubelet          Node ha-925161-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     4m47s (x7 over 4m47s)  kubelet          Node ha-925161-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-925161-m02 event: Registered Node ha-925161-m02 in Controller
	  Normal  RegisteredNode           4m6s                   node-controller  Node ha-925161-m02 event: Registered Node ha-925161-m02 in Controller
	  Normal  RegisteredNode           3m9s                   node-controller  Node ha-925161-m02 event: Registered Node ha-925161-m02 in Controller
	  Normal  NodeNotReady             102s                   node-controller  Node ha-925161-m02 status is now: NodeNotReady
	
	
	Name:               ha-925161-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-925161-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=ha-925161
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T04_27_30_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 04:27:29 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-925161-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 04:37:50 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 19 Jul 2024 04:37:29 +0000   Fri, 19 Jul 2024 04:38:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 19 Jul 2024 04:37:29 +0000   Fri, 19 Jul 2024 04:38:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 19 Jul 2024 04:37:29 +0000   Fri, 19 Jul 2024 04:38:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 19 Jul 2024 04:37:29 +0000   Fri, 19 Jul 2024 04:38:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.75
	  Hostname:    ha-925161-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e08274d02fa64707986686183076854f
	  System UUID:                e08274d0-2fa6-4707-9866-86183076854f
	  Boot ID:                    af36be98-8b95-4bf4-abe3-9ae5efece267
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dw4vp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-dnwxp              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-f4fgd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x3 over 12m)      kubelet          Node ha-925161-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x3 over 12m)      kubelet          Node ha-925161-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x3 over 12m)      kubelet          Node ha-925161-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-925161-m04 event: Registered Node ha-925161-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-925161-m04 event: Registered Node ha-925161-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-925161-m04 event: Registered Node ha-925161-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-925161-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m13s                  node-controller  Node ha-925161-m04 event: Registered Node ha-925161-m04 in Controller
	  Normal   RegisteredNode           4m6s                   node-controller  Node ha-925161-m04 event: Registered Node ha-925161-m04 in Controller
	  Normal   NodeNotReady             3m32s                  node-controller  Node ha-925161-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m9s                   node-controller  Node ha-925161-m04 event: Registered Node ha-925161-m04 in Controller
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 2m48s (x2 over 2m48s)  kubelet          Node ha-925161-m04 has been rebooted, boot id: af36be98-8b95-4bf4-abe3-9ae5efece267
	  Normal   NodeHasSufficientMemory  2m48s (x3 over 2m48s)  kubelet          Node ha-925161-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x3 over 2m48s)  kubelet          Node ha-925161-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x3 over 2m48s)  kubelet          Node ha-925161-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             2m48s                  kubelet          Node ha-925161-m04 status is now: NodeNotReady
	  Normal   NodeReady                2m48s                  kubelet          Node ha-925161-m04 status is now: NodeReady
	  Normal   NodeNotReady             106s                   node-controller  Node ha-925161-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +8.442247] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.062592] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054468] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.195426] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.118864] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.257746] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +3.980513] systemd-fstab-generator[774]: Ignoring "noauto" option for root device
	[Jul19 04:23] systemd-fstab-generator[956]: Ignoring "noauto" option for root device
	[  +0.065569] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.069928] systemd-fstab-generator[1370]: Ignoring "noauto" option for root device
	[  +0.091097] kauditd_printk_skb: 79 callbacks suppressed
	[ +13.840611] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.224120] kauditd_printk_skb: 38 callbacks suppressed
	[Jul19 04:24] kauditd_printk_skb: 26 callbacks suppressed
	[Jul19 04:32] kauditd_printk_skb: 1 callbacks suppressed
	[Jul19 04:35] systemd-fstab-generator[3742]: Ignoring "noauto" option for root device
	[  +0.159179] systemd-fstab-generator[3754]: Ignoring "noauto" option for root device
	[  +0.181418] systemd-fstab-generator[3769]: Ignoring "noauto" option for root device
	[  +0.156782] systemd-fstab-generator[3781]: Ignoring "noauto" option for root device
	[  +0.275585] systemd-fstab-generator[3810]: Ignoring "noauto" option for root device
	[  +0.922859] systemd-fstab-generator[3926]: Ignoring "noauto" option for root device
	[  +6.529241] kauditd_printk_skb: 127 callbacks suppressed
	[ +16.805791] kauditd_printk_skb: 86 callbacks suppressed
	[  +5.878952] kauditd_printk_skb: 1 callbacks suppressed
	[Jul19 04:36] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [38d19d3723a70de48633fa0df9c13a3b49ac927058ddbec544d2e1756ed2128b] <==
	{"level":"warn","ts":"2024-07-19T04:36:52.954336Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"29da33e6eb84f18b","rtt":"0s","error":"dial tcp 192.168.39.190:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-19T04:37:33.10659Z","caller":"traceutil/trace.go:171","msg":"trace[352178164] linearizableReadLoop","detail":"{readStateIndex:3106; appliedIndex:3107; }","duration":"132.263544ms","start":"2024-07-19T04:37:32.974286Z","end":"2024-07-19T04:37:33.10655Z","steps":["trace[352178164] 'read index received'  (duration: 132.259346ms)","trace[352178164] 'applied index is now lower than readState.Index'  (duration: 3.288µs)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T04:37:33.107559Z","caller":"traceutil/trace.go:171","msg":"trace[1732292203] transaction","detail":"{read_only:false; response_revision:2658; number_of_response:1; }","duration":"134.208009ms","start":"2024-07-19T04:37:32.973333Z","end":"2024-07-19T04:37:33.107541Z","steps":["trace[1732292203] 'process raft request'  (duration: 133.834226ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T04:37:33.111237Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.87939ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T04:37:33.111413Z","caller":"traceutil/trace.go:171","msg":"trace[949516260] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2658; }","duration":"137.13256ms","start":"2024-07-19T04:37:32.974261Z","end":"2024-07-19T04:37:33.111394Z","steps":["trace[949516260] 'agreement among raft nodes before linearized reading'  (duration: 133.144142ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T04:37:43.366899Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b19954eb16571c64 switched to configuration voters=(12797353184818830436 16795722768998361870)"}
	{"level":"info","ts":"2024-07-19T04:37:43.369413Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"7954d586cad9e091","local-member-id":"b19954eb16571c64","removed-remote-peer-id":"29da33e6eb84f18b","removed-remote-peer-urls":["https://192.168.39.190:2380"]}
	{"level":"info","ts":"2024-07-19T04:37:43.369553Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"29da33e6eb84f18b"}
	{"level":"warn","ts":"2024-07-19T04:37:43.369696Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"29da33e6eb84f18b"}
	{"level":"info","ts":"2024-07-19T04:37:43.369751Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"29da33e6eb84f18b"}
	{"level":"warn","ts":"2024-07-19T04:37:43.369733Z","caller":"etcdserver/server.go:980","msg":"rejected Raft message from removed member","local-member-id":"b19954eb16571c64","removed-member-id":"29da33e6eb84f18b"}
	{"level":"warn","ts":"2024-07-19T04:37:43.36999Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"warn","ts":"2024-07-19T04:37:43.370096Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"29da33e6eb84f18b"}
	{"level":"info","ts":"2024-07-19T04:37:43.37015Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"29da33e6eb84f18b"}
	{"level":"info","ts":"2024-07-19T04:37:43.370226Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b19954eb16571c64","remote-peer-id":"29da33e6eb84f18b"}
	{"level":"warn","ts":"2024-07-19T04:37:43.370505Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b19954eb16571c64","remote-peer-id":"29da33e6eb84f18b","error":"context canceled"}
	{"level":"warn","ts":"2024-07-19T04:37:43.370588Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"29da33e6eb84f18b","error":"failed to read 29da33e6eb84f18b on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-07-19T04:37:43.370643Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b19954eb16571c64","remote-peer-id":"29da33e6eb84f18b"}
	{"level":"warn","ts":"2024-07-19T04:37:43.370876Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b19954eb16571c64","remote-peer-id":"29da33e6eb84f18b","error":"context canceled"}
	{"level":"info","ts":"2024-07-19T04:37:43.371001Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b19954eb16571c64","remote-peer-id":"29da33e6eb84f18b"}
	{"level":"info","ts":"2024-07-19T04:37:43.371051Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"29da33e6eb84f18b"}
	{"level":"info","ts":"2024-07-19T04:37:43.371106Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"b19954eb16571c64","removed-remote-peer-id":"29da33e6eb84f18b"}
	{"level":"warn","ts":"2024-07-19T04:37:43.388364Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"b19954eb16571c64","remote-peer-id-stream-handler":"b19954eb16571c64","remote-peer-id-from":"29da33e6eb84f18b"}
	{"level":"warn","ts":"2024-07-19T04:37:43.392921Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.190:55868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-07-19T04:37:43.429118Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.190:55888","server-name":"","error":"EOF"}
	
	
	==> etcd [b041f48cc90cfab5ce219992de0d8ffb58d778e852683bd80044d359a0b7d010] <==
	{"level":"info","ts":"2024-07-19T04:33:36.53321Z","caller":"traceutil/trace.go:171","msg":"trace[296995039] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; }","duration":"519.813015ms","start":"2024-07-19T04:33:36.013393Z","end":"2024-07-19T04:33:36.533207Z","steps":["trace[296995039] 'agreement among raft nodes before linearized reading'  (duration: 500.946689ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T04:33:36.533224Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T04:33:36.013389Z","time spent":"519.830447ms","remote":"127.0.0.1:43672","response type":"/etcdserverpb.KV/Range","request count":0,"request size":63,"response count":0,"response size":0,"request content":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" limit:10000 "}
	2024/07/19 04:33:36 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-19T04:33:36.533314Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T04:33:28.998363Z","time spent":"7.534563905s","remote":"127.0.0.1:43502","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":0,"response size":0,"request content":"key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" limit:10000 "}
	2024/07/19 04:33:36 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-19T04:33:36.58319Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.246:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T04:33:36.583399Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.246:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-19T04:33:36.584651Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b19954eb16571c64","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-19T04:33:36.584875Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"e91664def0166b0e"}
	{"level":"info","ts":"2024-07-19T04:33:36.584913Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e91664def0166b0e"}
	{"level":"info","ts":"2024-07-19T04:33:36.584971Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e91664def0166b0e"}
	{"level":"info","ts":"2024-07-19T04:33:36.585098Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e"}
	{"level":"info","ts":"2024-07-19T04:33:36.585145Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e"}
	{"level":"info","ts":"2024-07-19T04:33:36.58519Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b19954eb16571c64","remote-peer-id":"e91664def0166b0e"}
	{"level":"info","ts":"2024-07-19T04:33:36.585228Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"e91664def0166b0e"}
	{"level":"info","ts":"2024-07-19T04:33:36.585236Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"29da33e6eb84f18b"}
	{"level":"info","ts":"2024-07-19T04:33:36.585245Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"29da33e6eb84f18b"}
	{"level":"info","ts":"2024-07-19T04:33:36.585264Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"29da33e6eb84f18b"}
	{"level":"info","ts":"2024-07-19T04:33:36.585342Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b19954eb16571c64","remote-peer-id":"29da33e6eb84f18b"}
	{"level":"info","ts":"2024-07-19T04:33:36.585457Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b19954eb16571c64","remote-peer-id":"29da33e6eb84f18b"}
	{"level":"info","ts":"2024-07-19T04:33:36.585509Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b19954eb16571c64","remote-peer-id":"29da33e6eb84f18b"}
	{"level":"info","ts":"2024-07-19T04:33:36.585522Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"29da33e6eb84f18b"}
	{"level":"info","ts":"2024-07-19T04:33:36.587816Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.246:2380"}
	{"level":"info","ts":"2024-07-19T04:33:36.588095Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.246:2380"}
	{"level":"info","ts":"2024-07-19T04:33:36.588168Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-925161","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.246:2380"],"advertise-client-urls":["https://192.168.39.246:2379"]}
	
	
	==> kernel <==
	 04:40:17 up 17 min,  0 users,  load average: 0.29, 0.26, 0.20
	Linux ha-925161 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1109d10f2b3d47b16b6903644a11f08183882b4c704e06aa23088c57b2fa5036] <==
	I0719 04:33:14.195172       1 main.go:299] Handling node with IPs: map[192.168.39.102:{}]
	I0719 04:33:14.195289       1 main.go:326] Node ha-925161-m02 has CIDR [10.244.1.0/24] 
	I0719 04:33:14.195455       1 main.go:299] Handling node with IPs: map[192.168.39.190:{}]
	I0719 04:33:14.195511       1 main.go:326] Node ha-925161-m03 has CIDR [10.244.2.0/24] 
	I0719 04:33:14.195578       1 main.go:299] Handling node with IPs: map[192.168.39.75:{}]
	I0719 04:33:14.195597       1 main.go:326] Node ha-925161-m04 has CIDR [10.244.3.0/24] 
	I0719 04:33:14.195664       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0719 04:33:14.195683       1 main.go:303] handling current node
	I0719 04:33:24.194969       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0719 04:33:24.195082       1 main.go:303] handling current node
	I0719 04:33:24.195111       1 main.go:299] Handling node with IPs: map[192.168.39.102:{}]
	I0719 04:33:24.195128       1 main.go:326] Node ha-925161-m02 has CIDR [10.244.1.0/24] 
	I0719 04:33:24.195275       1 main.go:299] Handling node with IPs: map[192.168.39.190:{}]
	I0719 04:33:24.195364       1 main.go:326] Node ha-925161-m03 has CIDR [10.244.2.0/24] 
	I0719 04:33:24.195498       1 main.go:299] Handling node with IPs: map[192.168.39.75:{}]
	I0719 04:33:24.195522       1 main.go:326] Node ha-925161-m04 has CIDR [10.244.3.0/24] 
	I0719 04:33:34.195409       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0719 04:33:34.195671       1 main.go:303] handling current node
	I0719 04:33:34.195716       1 main.go:299] Handling node with IPs: map[192.168.39.102:{}]
	I0719 04:33:34.195738       1 main.go:326] Node ha-925161-m02 has CIDR [10.244.1.0/24] 
	I0719 04:33:34.195996       1 main.go:299] Handling node with IPs: map[192.168.39.190:{}]
	I0719 04:33:34.196056       1 main.go:326] Node ha-925161-m03 has CIDR [10.244.2.0/24] 
	I0719 04:33:34.196146       1 main.go:299] Handling node with IPs: map[192.168.39.75:{}]
	I0719 04:33:34.196180       1 main.go:326] Node ha-925161-m04 has CIDR [10.244.3.0/24] 
	E0719 04:33:35.016562       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""
	
	
	==> kindnet [76385fe3aa9b3b1e67ba577f6669ee7c0a1a6cd4a3652f4043910a7d5e44af35] <==
	I0719 04:39:28.001342       1 main.go:326] Node ha-925161-m04 has CIDR [10.244.3.0/24] 
	I0719 04:39:37.998716       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0719 04:39:37.998797       1 main.go:303] handling current node
	I0719 04:39:37.998823       1 main.go:299] Handling node with IPs: map[192.168.39.102:{}]
	I0719 04:39:37.998832       1 main.go:326] Node ha-925161-m02 has CIDR [10.244.1.0/24] 
	I0719 04:39:37.999080       1 main.go:299] Handling node with IPs: map[192.168.39.75:{}]
	I0719 04:39:37.999102       1 main.go:326] Node ha-925161-m04 has CIDR [10.244.3.0/24] 
	I0719 04:39:47.993151       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0719 04:39:47.993287       1 main.go:303] handling current node
	I0719 04:39:47.993324       1 main.go:299] Handling node with IPs: map[192.168.39.102:{}]
	I0719 04:39:47.993343       1 main.go:326] Node ha-925161-m02 has CIDR [10.244.1.0/24] 
	I0719 04:39:47.993508       1 main.go:299] Handling node with IPs: map[192.168.39.75:{}]
	I0719 04:39:47.993570       1 main.go:326] Node ha-925161-m04 has CIDR [10.244.3.0/24] 
	I0719 04:39:57.997899       1 main.go:299] Handling node with IPs: map[192.168.39.75:{}]
	I0719 04:39:57.997972       1 main.go:326] Node ha-925161-m04 has CIDR [10.244.3.0/24] 
	I0719 04:39:57.998138       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0719 04:39:57.998160       1 main.go:303] handling current node
	I0719 04:39:57.998182       1 main.go:299] Handling node with IPs: map[192.168.39.102:{}]
	I0719 04:39:57.998187       1 main.go:326] Node ha-925161-m02 has CIDR [10.244.1.0/24] 
	I0719 04:40:08.000173       1 main.go:299] Handling node with IPs: map[192.168.39.75:{}]
	I0719 04:40:08.000287       1 main.go:326] Node ha-925161-m04 has CIDR [10.244.3.0/24] 
	I0719 04:40:08.000445       1 main.go:299] Handling node with IPs: map[192.168.39.246:{}]
	I0719 04:40:08.000467       1 main.go:303] handling current node
	I0719 04:40:08.000482       1 main.go:299] Handling node with IPs: map[192.168.39.102:{}]
	I0719 04:40:08.000487       1 main.go:326] Node ha-925161-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [31e39f44635dba72799e46f73a50c13c6ba21bf7a3b7dc6391fc917ef06f12f3] <==
	I0719 04:35:17.198245       1 options.go:221] external host was not specified, using 192.168.39.246
	I0719 04:35:17.201652       1 server.go:148] Version: v1.30.3
	I0719 04:35:17.201698       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 04:35:17.871615       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0719 04:35:17.871779       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0719 04:35:17.877034       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0719 04:35:17.877103       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0719 04:35:17.877280       1 instance.go:299] Using reconciler: lease
	W0719 04:35:37.861886       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0719 04:35:37.862153       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0719 04:35:37.877886       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	W0719 04:35:37.879147       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	
	
	==> kube-apiserver [e01d6998bdb35fcf68bf94a93f0f52290926f382244cf5c91f43ccb8653b233c] <==
	I0719 04:35:59.424218       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0719 04:35:59.489014       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0719 04:35:59.489468       1 shared_informer.go:320] Caches are synced for configmaps
	I0719 04:35:59.489841       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0719 04:35:59.490215       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0719 04:35:59.490279       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0719 04:35:59.490305       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0719 04:35:59.495696       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	W0719 04:35:59.500112       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.190]
	I0719 04:35:59.525181       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0719 04:35:59.525241       1 aggregator.go:165] initial CRD sync complete...
	I0719 04:35:59.525269       1 autoregister_controller.go:141] Starting autoregister controller
	I0719 04:35:59.525275       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0719 04:35:59.525280       1 cache.go:39] Caches are synced for autoregister controller
	I0719 04:35:59.540221       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0719 04:35:59.549992       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0719 04:35:59.550062       1 policy_source.go:224] refreshing policies
	I0719 04:35:59.587686       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0719 04:35:59.601286       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 04:35:59.613239       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0719 04:35:59.629179       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0719 04:36:00.395459       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0719 04:36:00.786621       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.190 192.168.39.246]
	W0719 04:36:10.762057       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.246]
	W0719 04:38:00.768861       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.246]
	
	
	==> kube-controller-manager [2acd3b5d4b137fa864e9c0ca3e381b555ea7b28350ff23870b7291cd3f9ac68d] <==
	I0719 04:35:18.313763       1 serving.go:380] Generated self-signed cert in-memory
	I0719 04:35:18.883345       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0719 04:35:18.883378       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 04:35:18.885035       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0719 04:35:18.885171       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0719 04:35:18.885340       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0719 04:35:18.885613       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0719 04:35:38.887384       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.246:8443/healthz\": dial tcp 192.168.39.246:8443: connect: connection refused"
	
	
	==> kube-controller-manager [f26b178fe8fc44f1c17d3c9396d1db5bf694da9604aee967ff718d1294de0e4d] <==
	I0719 04:37:42.165344       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.932µs"
	I0719 04:37:42.540106       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.797µs"
	I0719 04:37:42.555131       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="118.24µs"
	I0719 04:37:42.561884       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="186.753µs"
	I0719 04:37:44.097082       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.749605ms"
	I0719 04:37:44.097294       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.377µs"
	I0719 04:37:54.924738       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-925161-m04"
	E0719 04:37:54.958061       1 garbagecollector.go:399] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"storage.k8s.io/v1", Kind:"CSINode", Name:"ha-925161-m03", UID:"1232a9e8-ed29-4f8f-be29-aa862dd45d74", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:""}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Node", Name:"ha-925161-m03", UID:"cd90ccd5-a4fa-4721-8cff-fe2bdc06393f", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: csinodes.storage.k8s.io "ha-925161-m03" not found
	E0719 04:37:54.962171       1 garbagecollector.go:399] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"coordination.k8s.io/v1", Kind:"Lease", Name:"ha-925161-m03", UID:"e312a07c-dd60-4e13-8783-8dff1b86c3ba", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"kube-node-lease"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Node", Name:"ha-925161-m03", UID:"cd90ccd5-a4fa-4721-8cff-fe2bdc06393f", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: leases.coordination.k8s.io "ha-925161-m03" not found
	E0719 04:38:11.783077       1 gc_controller.go:153] "Failed to get node" err="node \"ha-925161-m03\" not found" logger="pod-garbage-collector-controller" node="ha-925161-m03"
	E0719 04:38:11.783261       1 gc_controller.go:153] "Failed to get node" err="node \"ha-925161-m03\" not found" logger="pod-garbage-collector-controller" node="ha-925161-m03"
	E0719 04:38:11.783297       1 gc_controller.go:153] "Failed to get node" err="node \"ha-925161-m03\" not found" logger="pod-garbage-collector-controller" node="ha-925161-m03"
	E0719 04:38:11.783321       1 gc_controller.go:153] "Failed to get node" err="node \"ha-925161-m03\" not found" logger="pod-garbage-collector-controller" node="ha-925161-m03"
	E0719 04:38:11.783344       1 gc_controller.go:153] "Failed to get node" err="node \"ha-925161-m03\" not found" logger="pod-garbage-collector-controller" node="ha-925161-m03"
	E0719 04:38:31.784375       1 gc_controller.go:153] "Failed to get node" err="node \"ha-925161-m03\" not found" logger="pod-garbage-collector-controller" node="ha-925161-m03"
	E0719 04:38:31.784456       1 gc_controller.go:153] "Failed to get node" err="node \"ha-925161-m03\" not found" logger="pod-garbage-collector-controller" node="ha-925161-m03"
	E0719 04:38:31.784465       1 gc_controller.go:153] "Failed to get node" err="node \"ha-925161-m03\" not found" logger="pod-garbage-collector-controller" node="ha-925161-m03"
	E0719 04:38:31.784470       1 gc_controller.go:153] "Failed to get node" err="node \"ha-925161-m03\" not found" logger="pod-garbage-collector-controller" node="ha-925161-m03"
	E0719 04:38:31.784475       1 gc_controller.go:153] "Failed to get node" err="node \"ha-925161-m03\" not found" logger="pod-garbage-collector-controller" node="ha-925161-m03"
	I0719 04:38:31.831919       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.104071ms"
	I0719 04:38:31.832270       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.545µs"
	I0719 04:38:35.168613       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.860046ms"
	I0719 04:38:35.168766       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.97µs"
	I0719 04:38:50.872735       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.52004ms"
	I0719 04:38:50.873901       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.317µs"
	
	
	==> kube-proxy [66526fd5cc961dcad93f9334ace4639ff28d46e16c57e6a7665a73c0106842bc] <==
	E0719 04:35:20.767528       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-925161\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0719 04:35:23.840052       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-925161\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0719 04:35:26.912606       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-925161\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0719 04:35:33.056709       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-925161\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0719 04:35:42.272344       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-925161\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0719 04:36:00.704659       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-925161\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0719 04:36:00.704844       1 server.go:1032] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0719 04:36:00.764479       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 04:36:00.764687       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 04:36:00.764714       1 server_linux.go:165] "Using iptables Proxier"
	I0719 04:36:00.783533       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 04:36:00.783999       1 server.go:872] "Version info" version="v1.30.3"
	I0719 04:36:00.784248       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 04:36:00.789648       1 config.go:192] "Starting service config controller"
	I0719 04:36:00.789748       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 04:36:00.789842       1 config.go:101] "Starting endpoint slice config controller"
	I0719 04:36:00.789858       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 04:36:00.791772       1 config.go:319] "Starting node config controller"
	I0719 04:36:00.791795       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 04:36:00.890546       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 04:36:00.890628       1 shared_informer.go:320] Caches are synced for service config
	I0719 04:36:00.892091       1 shared_informer.go:320] Caches are synced for node config
	W0719 04:38:45.272593       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0719 04:38:45.272897       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0719 04:38:45.272930       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-proxy [6c9e12889a1662da7b7e18d67865a94825c080659d17bd601943c475e944e3b6] <==
	E0719 04:32:25.663836       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1993": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 04:32:28.735343       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-925161&resourceVersion=2038": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 04:32:28.735402       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-925161&resourceVersion=2038": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 04:32:28.735478       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2009": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 04:32:28.735509       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2009": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 04:32:28.735486       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1993": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 04:32:28.735581       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1993": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 04:32:34.880493       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-925161&resourceVersion=2038": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 04:32:34.880652       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-925161&resourceVersion=2038": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 04:32:34.880879       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1993": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 04:32:34.881060       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1993": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 04:32:34.881270       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2009": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 04:32:34.881358       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2009": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 04:32:44.095578       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1993": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 04:32:44.096848       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1993": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 04:32:47.167690       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-925161&resourceVersion=2038": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 04:32:47.167793       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-925161&resourceVersion=2038": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 04:32:47.167983       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2009": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 04:32:47.168024       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2009": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 04:32:59.455569       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1993": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 04:32:59.455692       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1993": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 04:33:05.600409       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-925161&resourceVersion=2038": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 04:33:05.600495       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-925161&resourceVersion=2038": dial tcp 192.168.39.254:8443: connect: no route to host
	W0719 04:33:05.600754       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2009": dial tcp 192.168.39.254:8443: connect: no route to host
	E0719 04:33:05.600819       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=2009": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [3fd2dbf80d04b99672d475d13595fc0af6b058ef669561db78a22ceb235839f8] <==
	W0719 04:35:56.409684       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.246:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	E0719 04:35:56.409742       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.246:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	W0719 04:35:56.667802       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.39.246:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	E0719 04:35:56.667850       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.246:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	W0719 04:35:56.852873       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.246:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	E0719 04:35:56.852922       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.246:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.246:8443: connect: connection refused
	W0719 04:35:59.431782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0719 04:35:59.431843       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0719 04:35:59.431908       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 04:35:59.431968       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0719 04:35:59.432032       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 04:35:59.432058       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 04:35:59.432094       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 04:35:59.432117       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0719 04:35:59.432162       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 04:35:59.432185       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0719 04:35:59.432262       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 04:35:59.432288       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 04:35:59.432337       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 04:35:59.432371       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0719 04:36:10.304245       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0719 04:37:40.071355       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-dw4vp\": pod busybox-fc5497c4f-dw4vp is already assigned to node \"ha-925161-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-dw4vp" node="ha-925161-m04"
	E0719 04:37:40.071663       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 2c49584d-418f-412f-9c7e-d346de0741d1(default/busybox-fc5497c4f-dw4vp) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-dw4vp"
	E0719 04:37:40.071748       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-dw4vp\": pod busybox-fc5497c4f-dw4vp is already assigned to node \"ha-925161-m04\"" pod="default/busybox-fc5497c4f-dw4vp"
	I0719 04:37:40.071816       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-dw4vp" node="ha-925161-m04"
	
	
	==> kube-scheduler [eeef22350ca0f1246fb4f32ba2c27abda6e12fc362e76159e9806d53d4296a23] <==
	W0719 04:33:30.107914       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 04:33:30.107976       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 04:33:30.241858       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 04:33:30.242045       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 04:33:30.322760       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 04:33:30.322793       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0719 04:33:30.538544       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0719 04:33:30.538680       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0719 04:33:30.567093       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 04:33:30.567128       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0719 04:33:30.608485       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0719 04:33:30.608580       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0719 04:33:30.799999       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 04:33:30.800041       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0719 04:33:31.140189       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 04:33:31.140220       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0719 04:33:31.216854       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 04:33:31.216925       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0719 04:33:31.302828       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0719 04:33:31.303003       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0719 04:33:31.332417       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 04:33:31.332660       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0719 04:33:36.469087       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 04:33:36.469124       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 04:33:36.500702       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 19 04:36:09 ha-925161 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 04:36:09 ha-925161 kubelet[1377]: I0719 04:36:09.155888    1377 scope.go:117] "RemoveContainer" containerID="045e2b3cfc66b6262fa44a5bd06e4d8e1f9812326318a276daa8b6d80eae81cc"
	Jul 19 04:36:15 ha-925161 kubelet[1377]: I0719 04:36:15.078514    1377 scope.go:117] "RemoveContainer" containerID="458930eb4d22263ff4b3c2565edc5f57985aadb6c9bccfa7be738ef94f1f5a3d"
	Jul 19 04:36:37 ha-925161 kubelet[1377]: I0719 04:36:37.078468    1377 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-925161" podUID="8d01a874-336e-476c-b079-852250b3bbcd"
	Jul 19 04:36:37 ha-925161 kubelet[1377]: I0719 04:36:37.097148    1377 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-925161"
	Jul 19 04:37:09 ha-925161 kubelet[1377]: E0719 04:37:09.119326    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 04:37:09 ha-925161 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 04:37:09 ha-925161 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 04:37:09 ha-925161 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 04:37:09 ha-925161 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 04:38:09 ha-925161 kubelet[1377]: E0719 04:38:09.118371    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 04:38:09 ha-925161 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 04:38:09 ha-925161 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 04:38:09 ha-925161 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 04:38:09 ha-925161 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 04:39:09 ha-925161 kubelet[1377]: E0719 04:39:09.118518    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 04:39:09 ha-925161 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 04:39:09 ha-925161 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 04:39:09 ha-925161 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 04:39:09 ha-925161 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 04:40:09 ha-925161 kubelet[1377]: E0719 04:40:09.119001    1377 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 04:40:09 ha-925161 kubelet[1377]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 04:40:09 ha-925161 kubelet[1377]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 04:40:09 ha-925161 kubelet[1377]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 04:40:09 ha-925161 kubelet[1377]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 04:40:16.678081  154257 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19302-122995/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-925161 -n ha-925161
helpers_test.go:261: (dbg) Run:  kubectl --context ha-925161 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.67s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (325.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-270078
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-270078
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-270078: exit status 82 (2m1.721944791s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-270078-m03"  ...
	* Stopping node "multinode-270078-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-270078" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-270078 --wait=true -v=8 --alsologtostderr
E0719 04:56:36.836298  130170 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/functional-554179/client.crt: no such file or directory
E0719 04:59:39.880767  130170 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/functional-554179/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-270078 --wait=true -v=8 --alsologtostderr: (3m21.171350854s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-270078
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-270078 -n multinode-270078
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-270078 logs -n 25: (1.499094901s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-270078 ssh -n                                                                 | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | multinode-270078-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-270078 cp multinode-270078-m02:/home/docker/cp-test.txt                       | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3247087681/001/cp-test_multinode-270078-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-270078 ssh -n                                                                 | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | multinode-270078-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-270078 cp multinode-270078-m02:/home/docker/cp-test.txt                       | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | multinode-270078:/home/docker/cp-test_multinode-270078-m02_multinode-270078.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-270078 ssh -n                                                                 | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | multinode-270078-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-270078 ssh -n multinode-270078 sudo cat                                       | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | /home/docker/cp-test_multinode-270078-m02_multinode-270078.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-270078 cp multinode-270078-m02:/home/docker/cp-test.txt                       | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | multinode-270078-m03:/home/docker/cp-test_multinode-270078-m02_multinode-270078-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-270078 ssh -n                                                                 | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | multinode-270078-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-270078 ssh -n multinode-270078-m03 sudo cat                                   | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | /home/docker/cp-test_multinode-270078-m02_multinode-270078-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-270078 cp testdata/cp-test.txt                                                | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | multinode-270078-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-270078 ssh -n                                                                 | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | multinode-270078-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-270078 cp multinode-270078-m03:/home/docker/cp-test.txt                       | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3247087681/001/cp-test_multinode-270078-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-270078 ssh -n                                                                 | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | multinode-270078-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-270078 cp multinode-270078-m03:/home/docker/cp-test.txt                       | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | multinode-270078:/home/docker/cp-test_multinode-270078-m03_multinode-270078.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-270078 ssh -n                                                                 | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | multinode-270078-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-270078 ssh -n multinode-270078 sudo cat                                       | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | /home/docker/cp-test_multinode-270078-m03_multinode-270078.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-270078 cp multinode-270078-m03:/home/docker/cp-test.txt                       | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | multinode-270078-m02:/home/docker/cp-test_multinode-270078-m03_multinode-270078-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-270078 ssh -n                                                                 | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | multinode-270078-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-270078 ssh -n multinode-270078-m02 sudo cat                                   | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | /home/docker/cp-test_multinode-270078-m03_multinode-270078-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-270078 node stop m03                                                          | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	| node    | multinode-270078 node start                                                             | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:54 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-270078                                                                | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:54 UTC |                     |
	| stop    | -p multinode-270078                                                                     | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:54 UTC |                     |
	| start   | -p multinode-270078                                                                     | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:56 UTC | 19 Jul 24 04:59 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-270078                                                                | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:59 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 04:56:32
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 04:56:32.830279  163442 out.go:291] Setting OutFile to fd 1 ...
	I0719 04:56:32.830618  163442 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:56:32.830630  163442 out.go:304] Setting ErrFile to fd 2...
	I0719 04:56:32.830636  163442 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:56:32.830928  163442 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-122995/.minikube/bin
	I0719 04:56:32.831673  163442 out.go:298] Setting JSON to false
	I0719 04:56:32.832844  163442 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9536,"bootTime":1721355457,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 04:56:32.832909  163442 start.go:139] virtualization: kvm guest
	I0719 04:56:32.835362  163442 out.go:177] * [multinode-270078] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 04:56:32.836732  163442 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 04:56:32.836724  163442 notify.go:220] Checking for updates...
	I0719 04:56:32.839026  163442 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 04:56:32.840137  163442 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-122995/kubeconfig
	I0719 04:56:32.841341  163442 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-122995/.minikube
	I0719 04:56:32.842563  163442 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 04:56:32.843638  163442 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 04:56:32.845208  163442 config.go:182] Loaded profile config "multinode-270078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:56:32.845324  163442 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 04:56:32.845755  163442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:56:32.845810  163442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:56:32.860928  163442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40517
	I0719 04:56:32.861474  163442 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:56:32.862112  163442 main.go:141] libmachine: Using API Version  1
	I0719 04:56:32.862138  163442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:56:32.862556  163442 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:56:32.862780  163442 main.go:141] libmachine: (multinode-270078) Calling .DriverName
	I0719 04:56:32.898880  163442 out.go:177] * Using the kvm2 driver based on existing profile
	I0719 04:56:32.900033  163442 start.go:297] selected driver: kvm2
	I0719 04:56:32.900061  163442 start.go:901] validating driver "kvm2" against &{Name:multinode-270078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:multinode-270078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.199 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.126 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:56:32.900257  163442 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 04:56:32.900705  163442 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 04:56:32.900808  163442 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-122995/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 04:56:32.916182  163442 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 04:56:32.917182  163442 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 04:56:32.917265  163442 cni.go:84] Creating CNI manager for ""
	I0719 04:56:32.917283  163442 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0719 04:56:32.917387  163442 start.go:340] cluster config:
	{Name:multinode-270078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-270078 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.199 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.126 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false k
ong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:56:32.917563  163442 iso.go:125] acquiring lock: {Name:mk610026cb7ac7ecfa6440021a031d3b49160f81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 04:56:32.920243  163442 out.go:177] * Starting "multinode-270078" primary control-plane node in "multinode-270078" cluster
	I0719 04:56:32.921396  163442 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 04:56:32.921440  163442 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 04:56:32.921455  163442 cache.go:56] Caching tarball of preloaded images
	I0719 04:56:32.921545  163442 preload.go:172] Found /home/jenkins/minikube-integration/19302-122995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 04:56:32.921560  163442 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 04:56:32.921748  163442 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/multinode-270078/config.json ...
	I0719 04:56:32.922004  163442 start.go:360] acquireMachinesLock for multinode-270078: {Name:mkfbbe6ca8c44534b944b48224a0199ec825bc72 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 04:56:32.922062  163442 start.go:364] duration metric: took 31.582µs to acquireMachinesLock for "multinode-270078"
	I0719 04:56:32.922080  163442 start.go:96] Skipping create...Using existing machine configuration
	I0719 04:56:32.922086  163442 fix.go:54] fixHost starting: 
	I0719 04:56:32.922523  163442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:56:32.922570  163442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:56:32.938482  163442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37145
	I0719 04:56:32.938916  163442 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:56:32.939451  163442 main.go:141] libmachine: Using API Version  1
	I0719 04:56:32.939471  163442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:56:32.939845  163442 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:56:32.940083  163442 main.go:141] libmachine: (multinode-270078) Calling .DriverName
	I0719 04:56:32.940235  163442 main.go:141] libmachine: (multinode-270078) Calling .GetState
	I0719 04:56:32.941993  163442 fix.go:112] recreateIfNeeded on multinode-270078: state=Running err=<nil>
	W0719 04:56:32.942057  163442 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 04:56:32.944634  163442 out.go:177] * Updating the running kvm2 "multinode-270078" VM ...
	I0719 04:56:32.946117  163442 machine.go:94] provisionDockerMachine start ...
	I0719 04:56:32.946137  163442 main.go:141] libmachine: (multinode-270078) Calling .DriverName
	I0719 04:56:32.946348  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHHostname
	I0719 04:56:32.949198  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:56:32.949771  163442 main.go:141] libmachine: (multinode-270078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:0b:92", ip: ""} in network mk-multinode-270078: {Iface:virbr1 ExpiryTime:2024-07-19 05:51:08 +0000 UTC Type:0 Mac:52:54:00:16:0b:92 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-270078 Clientid:01:52:54:00:16:0b:92}
	I0719 04:56:32.949810  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:56:32.949949  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHPort
	I0719 04:56:32.950123  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHKeyPath
	I0719 04:56:32.950310  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHKeyPath
	I0719 04:56:32.950458  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHUsername
	I0719 04:56:32.950642  163442 main.go:141] libmachine: Using SSH client type: native
	I0719 04:56:32.950830  163442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0719 04:56:32.950843  163442 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 04:56:33.079669  163442 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-270078
	
	I0719 04:56:33.079714  163442 main.go:141] libmachine: (multinode-270078) Calling .GetMachineName
	I0719 04:56:33.079950  163442 buildroot.go:166] provisioning hostname "multinode-270078"
	I0719 04:56:33.079983  163442 main.go:141] libmachine: (multinode-270078) Calling .GetMachineName
	I0719 04:56:33.080196  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHHostname
	I0719 04:56:33.082932  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:56:33.083363  163442 main.go:141] libmachine: (multinode-270078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:0b:92", ip: ""} in network mk-multinode-270078: {Iface:virbr1 ExpiryTime:2024-07-19 05:51:08 +0000 UTC Type:0 Mac:52:54:00:16:0b:92 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-270078 Clientid:01:52:54:00:16:0b:92}
	I0719 04:56:33.083391  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:56:33.083587  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHPort
	I0719 04:56:33.083808  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHKeyPath
	I0719 04:56:33.083950  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHKeyPath
	I0719 04:56:33.084119  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHUsername
	I0719 04:56:33.084310  163442 main.go:141] libmachine: Using SSH client type: native
	I0719 04:56:33.084488  163442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0719 04:56:33.084504  163442 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-270078 && echo "multinode-270078" | sudo tee /etc/hostname
	I0719 04:56:33.216867  163442 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-270078
	
	I0719 04:56:33.216906  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHHostname
	I0719 04:56:33.219587  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:56:33.220017  163442 main.go:141] libmachine: (multinode-270078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:0b:92", ip: ""} in network mk-multinode-270078: {Iface:virbr1 ExpiryTime:2024-07-19 05:51:08 +0000 UTC Type:0 Mac:52:54:00:16:0b:92 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-270078 Clientid:01:52:54:00:16:0b:92}
	I0719 04:56:33.220050  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:56:33.220296  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHPort
	I0719 04:56:33.220519  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHKeyPath
	I0719 04:56:33.220669  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHKeyPath
	I0719 04:56:33.220813  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHUsername
	I0719 04:56:33.220961  163442 main.go:141] libmachine: Using SSH client type: native
	I0719 04:56:33.221203  163442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0719 04:56:33.221232  163442 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-270078' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-270078/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-270078' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 04:56:33.333766  163442 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 04:56:33.333800  163442 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-122995/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-122995/.minikube}
	I0719 04:56:33.333828  163442 buildroot.go:174] setting up certificates
	I0719 04:56:33.333837  163442 provision.go:84] configureAuth start
	I0719 04:56:33.333849  163442 main.go:141] libmachine: (multinode-270078) Calling .GetMachineName
	I0719 04:56:33.334119  163442 main.go:141] libmachine: (multinode-270078) Calling .GetIP
	I0719 04:56:33.336703  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:56:33.337026  163442 main.go:141] libmachine: (multinode-270078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:0b:92", ip: ""} in network mk-multinode-270078: {Iface:virbr1 ExpiryTime:2024-07-19 05:51:08 +0000 UTC Type:0 Mac:52:54:00:16:0b:92 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-270078 Clientid:01:52:54:00:16:0b:92}
	I0719 04:56:33.337049  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:56:33.337249  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHHostname
	I0719 04:56:33.339292  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:56:33.339602  163442 main.go:141] libmachine: (multinode-270078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:0b:92", ip: ""} in network mk-multinode-270078: {Iface:virbr1 ExpiryTime:2024-07-19 05:51:08 +0000 UTC Type:0 Mac:52:54:00:16:0b:92 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-270078 Clientid:01:52:54:00:16:0b:92}
	I0719 04:56:33.339632  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:56:33.339746  163442 provision.go:143] copyHostCerts
	I0719 04:56:33.339788  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem
	I0719 04:56:33.339826  163442 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem, removing ...
	I0719 04:56:33.339845  163442 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem
	I0719 04:56:33.339926  163442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem (1082 bytes)
	I0719 04:56:33.340095  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem
	I0719 04:56:33.340133  163442 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem, removing ...
	I0719 04:56:33.340145  163442 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem
	I0719 04:56:33.340196  163442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem (1123 bytes)
	I0719 04:56:33.340268  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem
	I0719 04:56:33.340291  163442 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem, removing ...
	I0719 04:56:33.340298  163442 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem
	I0719 04:56:33.340335  163442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem (1679 bytes)
	I0719 04:56:33.340791  163442 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem org=jenkins.multinode-270078 san=[127.0.0.1 192.168.39.17 localhost minikube multinode-270078]
	I0719 04:56:33.522240  163442 provision.go:177] copyRemoteCerts
	I0719 04:56:33.522302  163442 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 04:56:33.522328  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHHostname
	I0719 04:56:33.524816  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:56:33.525185  163442 main.go:141] libmachine: (multinode-270078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:0b:92", ip: ""} in network mk-multinode-270078: {Iface:virbr1 ExpiryTime:2024-07-19 05:51:08 +0000 UTC Type:0 Mac:52:54:00:16:0b:92 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-270078 Clientid:01:52:54:00:16:0b:92}
	I0719 04:56:33.525219  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:56:33.525368  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHPort
	I0719 04:56:33.525593  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHKeyPath
	I0719 04:56:33.525767  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHUsername
	I0719 04:56:33.525911  163442 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/multinode-270078/id_rsa Username:docker}
	I0719 04:56:33.616244  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 04:56:33.616318  163442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 04:56:33.642902  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 04:56:33.642982  163442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0719 04:56:33.665786  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 04:56:33.665863  163442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 04:56:33.687969  163442 provision.go:87] duration metric: took 354.118609ms to configureAuth
	I0719 04:56:33.688000  163442 buildroot.go:189] setting minikube options for container-runtime
	I0719 04:56:33.688288  163442 config.go:182] Loaded profile config "multinode-270078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:56:33.688382  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHHostname
	I0719 04:56:33.691181  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:56:33.691571  163442 main.go:141] libmachine: (multinode-270078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:0b:92", ip: ""} in network mk-multinode-270078: {Iface:virbr1 ExpiryTime:2024-07-19 05:51:08 +0000 UTC Type:0 Mac:52:54:00:16:0b:92 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-270078 Clientid:01:52:54:00:16:0b:92}
	I0719 04:56:33.691601  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:56:33.691769  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHPort
	I0719 04:56:33.691949  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHKeyPath
	I0719 04:56:33.692087  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHKeyPath
	I0719 04:56:33.692244  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHUsername
	I0719 04:56:33.692384  163442 main.go:141] libmachine: Using SSH client type: native
	I0719 04:56:33.692552  163442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0719 04:56:33.692567  163442 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 04:58:04.390851  163442 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 04:58:04.390888  163442 machine.go:97] duration metric: took 1m31.44475532s to provisionDockerMachine
	I0719 04:58:04.390903  163442 start.go:293] postStartSetup for "multinode-270078" (driver="kvm2")
	I0719 04:58:04.390917  163442 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 04:58:04.390939  163442 main.go:141] libmachine: (multinode-270078) Calling .DriverName
	I0719 04:58:04.391386  163442 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 04:58:04.391426  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHHostname
	I0719 04:58:04.394570  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:58:04.395015  163442 main.go:141] libmachine: (multinode-270078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:0b:92", ip: ""} in network mk-multinode-270078: {Iface:virbr1 ExpiryTime:2024-07-19 05:51:08 +0000 UTC Type:0 Mac:52:54:00:16:0b:92 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-270078 Clientid:01:52:54:00:16:0b:92}
	I0719 04:58:04.395046  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:58:04.395233  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHPort
	I0719 04:58:04.395439  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHKeyPath
	I0719 04:58:04.395628  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHUsername
	I0719 04:58:04.395806  163442 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/multinode-270078/id_rsa Username:docker}
	I0719 04:58:04.483611  163442 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 04:58:04.487141  163442 command_runner.go:130] > NAME=Buildroot
	I0719 04:58:04.487162  163442 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0719 04:58:04.487167  163442 command_runner.go:130] > ID=buildroot
	I0719 04:58:04.487171  163442 command_runner.go:130] > VERSION_ID=2023.02.9
	I0719 04:58:04.487178  163442 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0719 04:58:04.487252  163442 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 04:58:04.487279  163442 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-122995/.minikube/addons for local assets ...
	I0719 04:58:04.487351  163442 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-122995/.minikube/files for local assets ...
	I0719 04:58:04.487424  163442 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem -> 1301702.pem in /etc/ssl/certs
	I0719 04:58:04.487435  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem -> /etc/ssl/certs/1301702.pem
	I0719 04:58:04.487512  163442 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 04:58:04.495690  163442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem --> /etc/ssl/certs/1301702.pem (1708 bytes)
	I0719 04:58:04.517305  163442 start.go:296] duration metric: took 126.384948ms for postStartSetup
	I0719 04:58:04.517356  163442 fix.go:56] duration metric: took 1m31.59526608s for fixHost
	I0719 04:58:04.517380  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHHostname
	I0719 04:58:04.520055  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:58:04.520384  163442 main.go:141] libmachine: (multinode-270078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:0b:92", ip: ""} in network mk-multinode-270078: {Iface:virbr1 ExpiryTime:2024-07-19 05:51:08 +0000 UTC Type:0 Mac:52:54:00:16:0b:92 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-270078 Clientid:01:52:54:00:16:0b:92}
	I0719 04:58:04.520413  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:58:04.520554  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHPort
	I0719 04:58:04.520761  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHKeyPath
	I0719 04:58:04.520926  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHKeyPath
	I0719 04:58:04.521037  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHUsername
	I0719 04:58:04.521199  163442 main.go:141] libmachine: Using SSH client type: native
	I0719 04:58:04.521390  163442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0719 04:58:04.521403  163442 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 04:58:04.633523  163442 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721365084.609147770
	
	I0719 04:58:04.633554  163442 fix.go:216] guest clock: 1721365084.609147770
	I0719 04:58:04.633564  163442 fix.go:229] Guest: 2024-07-19 04:58:04.60914777 +0000 UTC Remote: 2024-07-19 04:58:04.517360877 +0000 UTC m=+91.724510886 (delta=91.786893ms)
	I0719 04:58:04.633585  163442 fix.go:200] guest clock delta is within tolerance: 91.786893ms
	I0719 04:58:04.633590  163442 start.go:83] releasing machines lock for "multinode-270078", held for 1m31.711518954s
	I0719 04:58:04.633608  163442 main.go:141] libmachine: (multinode-270078) Calling .DriverName
	I0719 04:58:04.633859  163442 main.go:141] libmachine: (multinode-270078) Calling .GetIP
	I0719 04:58:04.636442  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:58:04.636712  163442 main.go:141] libmachine: (multinode-270078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:0b:92", ip: ""} in network mk-multinode-270078: {Iface:virbr1 ExpiryTime:2024-07-19 05:51:08 +0000 UTC Type:0 Mac:52:54:00:16:0b:92 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-270078 Clientid:01:52:54:00:16:0b:92}
	I0719 04:58:04.636737  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:58:04.636895  163442 main.go:141] libmachine: (multinode-270078) Calling .DriverName
	I0719 04:58:04.637469  163442 main.go:141] libmachine: (multinode-270078) Calling .DriverName
	I0719 04:58:04.637654  163442 main.go:141] libmachine: (multinode-270078) Calling .DriverName
	I0719 04:58:04.637743  163442 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 04:58:04.637800  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHHostname
	I0719 04:58:04.637844  163442 ssh_runner.go:195] Run: cat /version.json
	I0719 04:58:04.637868  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHHostname
	I0719 04:58:04.640478  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:58:04.640715  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:58:04.640811  163442 main.go:141] libmachine: (multinode-270078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:0b:92", ip: ""} in network mk-multinode-270078: {Iface:virbr1 ExpiryTime:2024-07-19 05:51:08 +0000 UTC Type:0 Mac:52:54:00:16:0b:92 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-270078 Clientid:01:52:54:00:16:0b:92}
	I0719 04:58:04.640848  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:58:04.640987  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHPort
	I0719 04:58:04.641148  163442 main.go:141] libmachine: (multinode-270078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:0b:92", ip: ""} in network mk-multinode-270078: {Iface:virbr1 ExpiryTime:2024-07-19 05:51:08 +0000 UTC Type:0 Mac:52:54:00:16:0b:92 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-270078 Clientid:01:52:54:00:16:0b:92}
	I0719 04:58:04.641171  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:58:04.641179  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHKeyPath
	I0719 04:58:04.641352  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHUsername
	I0719 04:58:04.641355  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHPort
	I0719 04:58:04.641530  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHKeyPath
	I0719 04:58:04.641519  163442 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/multinode-270078/id_rsa Username:docker}
	I0719 04:58:04.641651  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHUsername
	I0719 04:58:04.641754  163442 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/multinode-270078/id_rsa Username:docker}
	I0719 04:58:04.721506  163442 command_runner.go:130] > {"iso_version": "v1.33.1-1721324531-19298", "kicbase_version": "v0.0.44-1721234491-19282", "minikube_version": "v1.33.1", "commit": "0e13329c5f674facda20b63833c6d01811d249dd"}
	I0719 04:58:04.722166  163442 ssh_runner.go:195] Run: systemctl --version
	I0719 04:58:04.758589  163442 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0719 04:58:04.758649  163442 command_runner.go:130] > systemd 252 (252)
	I0719 04:58:04.758678  163442 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0719 04:58:04.758745  163442 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 04:58:04.911513  163442 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0719 04:58:04.919340  163442 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0719 04:58:04.919537  163442 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 04:58:04.919625  163442 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 04:58:04.928555  163442 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0719 04:58:04.928576  163442 start.go:495] detecting cgroup driver to use...
	I0719 04:58:04.928635  163442 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 04:58:04.944454  163442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 04:58:04.958438  163442 docker.go:217] disabling cri-docker service (if available) ...
	I0719 04:58:04.958492  163442 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 04:58:04.971279  163442 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 04:58:04.984233  163442 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 04:58:05.127274  163442 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 04:58:05.262713  163442 docker.go:233] disabling docker service ...
	I0719 04:58:05.262797  163442 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 04:58:05.282091  163442 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 04:58:05.295744  163442 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 04:58:05.435396  163442 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 04:58:05.600680  163442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 04:58:05.615608  163442 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 04:58:05.635625  163442 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0719 04:58:05.636109  163442 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 04:58:05.636168  163442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:58:05.646290  163442 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 04:58:05.646342  163442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:58:05.656166  163442 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:58:05.665713  163442 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:58:05.675673  163442 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 04:58:05.685556  163442 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:58:05.695661  163442 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:58:05.707302  163442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:58:05.717117  163442 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 04:58:05.726233  163442 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0719 04:58:05.726297  163442 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 04:58:05.735329  163442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:58:05.887627  163442 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 04:58:06.441429  163442 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 04:58:06.441506  163442 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 04:58:06.446113  163442 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0719 04:58:06.446138  163442 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0719 04:58:06.446144  163442 command_runner.go:130] > Device: 0,22	Inode: 1325        Links: 1
	I0719 04:58:06.446151  163442 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0719 04:58:06.446156  163442 command_runner.go:130] > Access: 2024-07-19 04:58:06.315470181 +0000
	I0719 04:58:06.446162  163442 command_runner.go:130] > Modify: 2024-07-19 04:58:06.315470181 +0000
	I0719 04:58:06.446169  163442 command_runner.go:130] > Change: 2024-07-19 04:58:06.315470181 +0000
	I0719 04:58:06.446174  163442 command_runner.go:130] >  Birth: -
	I0719 04:58:06.446197  163442 start.go:563] Will wait 60s for crictl version
	I0719 04:58:06.446241  163442 ssh_runner.go:195] Run: which crictl
	I0719 04:58:06.449797  163442 command_runner.go:130] > /usr/bin/crictl
	I0719 04:58:06.449853  163442 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 04:58:06.483611  163442 command_runner.go:130] > Version:  0.1.0
	I0719 04:58:06.483639  163442 command_runner.go:130] > RuntimeName:  cri-o
	I0719 04:58:06.483647  163442 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0719 04:58:06.483655  163442 command_runner.go:130] > RuntimeApiVersion:  v1
	I0719 04:58:06.484615  163442 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 04:58:06.484698  163442 ssh_runner.go:195] Run: crio --version
	I0719 04:58:06.509895  163442 command_runner.go:130] > crio version 1.29.1
	I0719 04:58:06.509922  163442 command_runner.go:130] > Version:        1.29.1
	I0719 04:58:06.509928  163442 command_runner.go:130] > GitCommit:      unknown
	I0719 04:58:06.509933  163442 command_runner.go:130] > GitCommitDate:  unknown
	I0719 04:58:06.509936  163442 command_runner.go:130] > GitTreeState:   clean
	I0719 04:58:06.509946  163442 command_runner.go:130] > BuildDate:      2024-07-18T22:57:15Z
	I0719 04:58:06.509950  163442 command_runner.go:130] > GoVersion:      go1.21.6
	I0719 04:58:06.509954  163442 command_runner.go:130] > Compiler:       gc
	I0719 04:58:06.509958  163442 command_runner.go:130] > Platform:       linux/amd64
	I0719 04:58:06.509962  163442 command_runner.go:130] > Linkmode:       dynamic
	I0719 04:58:06.509966  163442 command_runner.go:130] > BuildTags:      
	I0719 04:58:06.509972  163442 command_runner.go:130] >   containers_image_ostree_stub
	I0719 04:58:06.509979  163442 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0719 04:58:06.509985  163442 command_runner.go:130] >   btrfs_noversion
	I0719 04:58:06.509994  163442 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0719 04:58:06.510002  163442 command_runner.go:130] >   libdm_no_deferred_remove
	I0719 04:58:06.510009  163442 command_runner.go:130] >   seccomp
	I0719 04:58:06.510015  163442 command_runner.go:130] > LDFlags:          unknown
	I0719 04:58:06.510019  163442 command_runner.go:130] > SeccompEnabled:   true
	I0719 04:58:06.510023  163442 command_runner.go:130] > AppArmorEnabled:  false
	I0719 04:58:06.511273  163442 ssh_runner.go:195] Run: crio --version
	I0719 04:58:06.539108  163442 command_runner.go:130] > crio version 1.29.1
	I0719 04:58:06.539132  163442 command_runner.go:130] > Version:        1.29.1
	I0719 04:58:06.539150  163442 command_runner.go:130] > GitCommit:      unknown
	I0719 04:58:06.539155  163442 command_runner.go:130] > GitCommitDate:  unknown
	I0719 04:58:06.539159  163442 command_runner.go:130] > GitTreeState:   clean
	I0719 04:58:06.539164  163442 command_runner.go:130] > BuildDate:      2024-07-18T22:57:15Z
	I0719 04:58:06.539171  163442 command_runner.go:130] > GoVersion:      go1.21.6
	I0719 04:58:06.539175  163442 command_runner.go:130] > Compiler:       gc
	I0719 04:58:06.539181  163442 command_runner.go:130] > Platform:       linux/amd64
	I0719 04:58:06.539185  163442 command_runner.go:130] > Linkmode:       dynamic
	I0719 04:58:06.539189  163442 command_runner.go:130] > BuildTags:      
	I0719 04:58:06.539193  163442 command_runner.go:130] >   containers_image_ostree_stub
	I0719 04:58:06.539197  163442 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0719 04:58:06.539202  163442 command_runner.go:130] >   btrfs_noversion
	I0719 04:58:06.539208  163442 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0719 04:58:06.539212  163442 command_runner.go:130] >   libdm_no_deferred_remove
	I0719 04:58:06.539220  163442 command_runner.go:130] >   seccomp
	I0719 04:58:06.539224  163442 command_runner.go:130] > LDFlags:          unknown
	I0719 04:58:06.539228  163442 command_runner.go:130] > SeccompEnabled:   true
	I0719 04:58:06.539234  163442 command_runner.go:130] > AppArmorEnabled:  false
	I0719 04:58:06.541253  163442 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 04:58:06.542602  163442 main.go:141] libmachine: (multinode-270078) Calling .GetIP
	I0719 04:58:06.545049  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:58:06.545387  163442 main.go:141] libmachine: (multinode-270078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:0b:92", ip: ""} in network mk-multinode-270078: {Iface:virbr1 ExpiryTime:2024-07-19 05:51:08 +0000 UTC Type:0 Mac:52:54:00:16:0b:92 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-270078 Clientid:01:52:54:00:16:0b:92}
	I0719 04:58:06.545413  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:58:06.545570  163442 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 04:58:06.549395  163442 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0719 04:58:06.549647  163442 kubeadm.go:883] updating cluster {Name:multinode-270078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-270078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.199 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.126 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 04:58:06.549789  163442 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 04:58:06.549849  163442 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 04:58:06.589767  163442 command_runner.go:130] > {
	I0719 04:58:06.589795  163442 command_runner.go:130] >   "images": [
	I0719 04:58:06.589801  163442 command_runner.go:130] >     {
	I0719 04:58:06.589813  163442 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0719 04:58:06.589820  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.589828  163442 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0719 04:58:06.589834  163442 command_runner.go:130] >       ],
	I0719 04:58:06.589840  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.589853  163442 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0719 04:58:06.589863  163442 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0719 04:58:06.589873  163442 command_runner.go:130] >       ],
	I0719 04:58:06.589881  163442 command_runner.go:130] >       "size": "87165492",
	I0719 04:58:06.589890  163442 command_runner.go:130] >       "uid": null,
	I0719 04:58:06.589896  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.589912  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.589917  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.589921  163442 command_runner.go:130] >     },
	I0719 04:58:06.589924  163442 command_runner.go:130] >     {
	I0719 04:58:06.589930  163442 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0719 04:58:06.589935  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.589941  163442 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0719 04:58:06.589946  163442 command_runner.go:130] >       ],
	I0719 04:58:06.589951  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.589958  163442 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0719 04:58:06.589967  163442 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0719 04:58:06.589973  163442 command_runner.go:130] >       ],
	I0719 04:58:06.589981  163442 command_runner.go:130] >       "size": "1363676",
	I0719 04:58:06.589988  163442 command_runner.go:130] >       "uid": null,
	I0719 04:58:06.590001  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.590007  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.590014  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.590017  163442 command_runner.go:130] >     },
	I0719 04:58:06.590021  163442 command_runner.go:130] >     {
	I0719 04:58:06.590027  163442 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0719 04:58:06.590032  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.590036  163442 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0719 04:58:06.590040  163442 command_runner.go:130] >       ],
	I0719 04:58:06.590045  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.590052  163442 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0719 04:58:06.590065  163442 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0719 04:58:06.590072  163442 command_runner.go:130] >       ],
	I0719 04:58:06.590079  163442 command_runner.go:130] >       "size": "31470524",
	I0719 04:58:06.590085  163442 command_runner.go:130] >       "uid": null,
	I0719 04:58:06.590092  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.590098  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.590108  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.590114  163442 command_runner.go:130] >     },
	I0719 04:58:06.590122  163442 command_runner.go:130] >     {
	I0719 04:58:06.590132  163442 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0719 04:58:06.590140  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.590145  163442 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0719 04:58:06.590151  163442 command_runner.go:130] >       ],
	I0719 04:58:06.590155  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.590168  163442 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0719 04:58:06.590188  163442 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0719 04:58:06.590197  163442 command_runner.go:130] >       ],
	I0719 04:58:06.590206  163442 command_runner.go:130] >       "size": "61245718",
	I0719 04:58:06.590217  163442 command_runner.go:130] >       "uid": null,
	I0719 04:58:06.590227  163442 command_runner.go:130] >       "username": "nonroot",
	I0719 04:58:06.590236  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.590244  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.590250  163442 command_runner.go:130] >     },
	I0719 04:58:06.590255  163442 command_runner.go:130] >     {
	I0719 04:58:06.590264  163442 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0719 04:58:06.590275  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.590282  163442 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0719 04:58:06.590291  163442 command_runner.go:130] >       ],
	I0719 04:58:06.590300  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.590314  163442 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0719 04:58:06.590327  163442 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0719 04:58:06.590341  163442 command_runner.go:130] >       ],
	I0719 04:58:06.590348  163442 command_runner.go:130] >       "size": "150779692",
	I0719 04:58:06.590354  163442 command_runner.go:130] >       "uid": {
	I0719 04:58:06.590363  163442 command_runner.go:130] >         "value": "0"
	I0719 04:58:06.590372  163442 command_runner.go:130] >       },
	I0719 04:58:06.590379  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.590389  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.590399  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.590407  163442 command_runner.go:130] >     },
	I0719 04:58:06.590415  163442 command_runner.go:130] >     {
	I0719 04:58:06.590426  163442 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0719 04:58:06.590436  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.590444  163442 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0719 04:58:06.590447  163442 command_runner.go:130] >       ],
	I0719 04:58:06.590456  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.590471  163442 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0719 04:58:06.590486  163442 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0719 04:58:06.590494  163442 command_runner.go:130] >       ],
	I0719 04:58:06.590505  163442 command_runner.go:130] >       "size": "117609954",
	I0719 04:58:06.590514  163442 command_runner.go:130] >       "uid": {
	I0719 04:58:06.590523  163442 command_runner.go:130] >         "value": "0"
	I0719 04:58:06.590531  163442 command_runner.go:130] >       },
	I0719 04:58:06.590538  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.590543  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.590551  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.590561  163442 command_runner.go:130] >     },
	I0719 04:58:06.590566  163442 command_runner.go:130] >     {
	I0719 04:58:06.590579  163442 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0719 04:58:06.590589  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.590601  163442 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0719 04:58:06.590609  163442 command_runner.go:130] >       ],
	I0719 04:58:06.590618  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.590631  163442 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0719 04:58:06.590644  163442 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0719 04:58:06.590652  163442 command_runner.go:130] >       ],
	I0719 04:58:06.590660  163442 command_runner.go:130] >       "size": "112198984",
	I0719 04:58:06.590669  163442 command_runner.go:130] >       "uid": {
	I0719 04:58:06.590679  163442 command_runner.go:130] >         "value": "0"
	I0719 04:58:06.590687  163442 command_runner.go:130] >       },
	I0719 04:58:06.590694  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.590703  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.590712  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.590718  163442 command_runner.go:130] >     },
	I0719 04:58:06.590722  163442 command_runner.go:130] >     {
	I0719 04:58:06.590734  163442 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0719 04:58:06.590744  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.590752  163442 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0719 04:58:06.590760  163442 command_runner.go:130] >       ],
	I0719 04:58:06.590767  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.590791  163442 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0719 04:58:06.590806  163442 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0719 04:58:06.590811  163442 command_runner.go:130] >       ],
	I0719 04:58:06.590815  163442 command_runner.go:130] >       "size": "85953945",
	I0719 04:58:06.590818  163442 command_runner.go:130] >       "uid": null,
	I0719 04:58:06.590825  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.590831  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.590837  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.590842  163442 command_runner.go:130] >     },
	I0719 04:58:06.590848  163442 command_runner.go:130] >     {
	I0719 04:58:06.590859  163442 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0719 04:58:06.590864  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.590872  163442 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0719 04:58:06.590878  163442 command_runner.go:130] >       ],
	I0719 04:58:06.590884  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.590896  163442 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0719 04:58:06.590903  163442 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0719 04:58:06.590908  163442 command_runner.go:130] >       ],
	I0719 04:58:06.590914  163442 command_runner.go:130] >       "size": "63051080",
	I0719 04:58:06.590924  163442 command_runner.go:130] >       "uid": {
	I0719 04:58:06.590930  163442 command_runner.go:130] >         "value": "0"
	I0719 04:58:06.590939  163442 command_runner.go:130] >       },
	I0719 04:58:06.590947  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.590955  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.590964  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.590972  163442 command_runner.go:130] >     },
	I0719 04:58:06.590978  163442 command_runner.go:130] >     {
	I0719 04:58:06.590991  163442 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0719 04:58:06.590997  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.591002  163442 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0719 04:58:06.591011  163442 command_runner.go:130] >       ],
	I0719 04:58:06.591017  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.591031  163442 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0719 04:58:06.591046  163442 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0719 04:58:06.591055  163442 command_runner.go:130] >       ],
	I0719 04:58:06.591062  163442 command_runner.go:130] >       "size": "750414",
	I0719 04:58:06.591070  163442 command_runner.go:130] >       "uid": {
	I0719 04:58:06.591077  163442 command_runner.go:130] >         "value": "65535"
	I0719 04:58:06.591084  163442 command_runner.go:130] >       },
	I0719 04:58:06.591089  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.591097  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.591107  163442 command_runner.go:130] >       "pinned": true
	I0719 04:58:06.591112  163442 command_runner.go:130] >     }
	I0719 04:58:06.591121  163442 command_runner.go:130] >   ]
	I0719 04:58:06.591129  163442 command_runner.go:130] > }
	I0719 04:58:06.591347  163442 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 04:58:06.591360  163442 crio.go:433] Images already preloaded, skipping extraction
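The two lines above summarize the decision taken from the `sudo crictl images --output json` payload: every image required for Kubernetes v1.30.3 on CRI-O is already present, so the preload tarball is not extracted again. As a minimal sketch of that kind of check (not the actual crio.go:514 implementation; the requiredImages list below is an invented example), the JSON can be decoded and its repo tags compared against a required set:

package main

import (
	"encoding/json"
	"fmt"
)

// imageList matches the shape of the `crictl images --output json` payload shown above.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// allPreloaded reports whether every required repo tag appears in the payload.
func allPreloaded(raw []byte, required []string) (bool, error) {
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		return false, err
	}
	have := make(map[string]bool)
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range required {
		if !have[want] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	// Invented sample payload and required list, for illustration only.
	raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.9"]},{"repoTags":["registry.k8s.io/etcd:3.5.12-0"]}]}`)
	required := []string{"registry.k8s.io/pause:3.9", "registry.k8s.io/etcd:3.5.12-0"}
	ok, err := allPreloaded(raw, required)
	fmt.Println(ok, err) // true <nil>
}

A fuller check would also need to handle images referenced only by digest, which is why the payload above lists repoDigests alongside repoTags.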
	I0719 04:58:06.591419  163442 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 04:58:06.622569  163442 command_runner.go:130] > {
	I0719 04:58:06.622597  163442 command_runner.go:130] >   "images": [
	I0719 04:58:06.622603  163442 command_runner.go:130] >     {
	I0719 04:58:06.622614  163442 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0719 04:58:06.622620  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.622628  163442 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0719 04:58:06.622633  163442 command_runner.go:130] >       ],
	I0719 04:58:06.622638  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.622650  163442 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0719 04:58:06.622660  163442 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0719 04:58:06.622666  163442 command_runner.go:130] >       ],
	I0719 04:58:06.622677  163442 command_runner.go:130] >       "size": "87165492",
	I0719 04:58:06.622687  163442 command_runner.go:130] >       "uid": null,
	I0719 04:58:06.622696  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.622709  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.622716  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.622722  163442 command_runner.go:130] >     },
	I0719 04:58:06.622729  163442 command_runner.go:130] >     {
	I0719 04:58:06.622741  163442 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0719 04:58:06.622751  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.622765  163442 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0719 04:58:06.622773  163442 command_runner.go:130] >       ],
	I0719 04:58:06.622780  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.622796  163442 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0719 04:58:06.622810  163442 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0719 04:58:06.622819  163442 command_runner.go:130] >       ],
	I0719 04:58:06.622827  163442 command_runner.go:130] >       "size": "1363676",
	I0719 04:58:06.622836  163442 command_runner.go:130] >       "uid": null,
	I0719 04:58:06.622846  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.622855  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.622864  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.622872  163442 command_runner.go:130] >     },
	I0719 04:58:06.622878  163442 command_runner.go:130] >     {
	I0719 04:58:06.622893  163442 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0719 04:58:06.622903  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.622913  163442 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0719 04:58:06.622921  163442 command_runner.go:130] >       ],
	I0719 04:58:06.622928  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.622944  163442 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0719 04:58:06.622960  163442 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0719 04:58:06.622969  163442 command_runner.go:130] >       ],
	I0719 04:58:06.622977  163442 command_runner.go:130] >       "size": "31470524",
	I0719 04:58:06.622986  163442 command_runner.go:130] >       "uid": null,
	I0719 04:58:06.622995  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.623005  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.623015  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.623022  163442 command_runner.go:130] >     },
	I0719 04:58:06.623029  163442 command_runner.go:130] >     {
	I0719 04:58:06.623041  163442 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0719 04:58:06.623049  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.623059  163442 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0719 04:58:06.623068  163442 command_runner.go:130] >       ],
	I0719 04:58:06.623075  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.623088  163442 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0719 04:58:06.623105  163442 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0719 04:58:06.623113  163442 command_runner.go:130] >       ],
	I0719 04:58:06.623119  163442 command_runner.go:130] >       "size": "61245718",
	I0719 04:58:06.623125  163442 command_runner.go:130] >       "uid": null,
	I0719 04:58:06.623133  163442 command_runner.go:130] >       "username": "nonroot",
	I0719 04:58:06.623143  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.623151  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.623159  163442 command_runner.go:130] >     },
	I0719 04:58:06.623165  163442 command_runner.go:130] >     {
	I0719 04:58:06.623178  163442 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0719 04:58:06.623188  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.623197  163442 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0719 04:58:06.623205  163442 command_runner.go:130] >       ],
	I0719 04:58:06.623213  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.623227  163442 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0719 04:58:06.623244  163442 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0719 04:58:06.623253  163442 command_runner.go:130] >       ],
	I0719 04:58:06.623261  163442 command_runner.go:130] >       "size": "150779692",
	I0719 04:58:06.623270  163442 command_runner.go:130] >       "uid": {
	I0719 04:58:06.623279  163442 command_runner.go:130] >         "value": "0"
	I0719 04:58:06.623287  163442 command_runner.go:130] >       },
	I0719 04:58:06.623295  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.623304  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.623314  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.623322  163442 command_runner.go:130] >     },
	I0719 04:58:06.623338  163442 command_runner.go:130] >     {
	I0719 04:58:06.623350  163442 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0719 04:58:06.623359  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.623370  163442 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0719 04:58:06.623378  163442 command_runner.go:130] >       ],
	I0719 04:58:06.623385  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.623400  163442 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0719 04:58:06.623417  163442 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0719 04:58:06.623426  163442 command_runner.go:130] >       ],
	I0719 04:58:06.623432  163442 command_runner.go:130] >       "size": "117609954",
	I0719 04:58:06.623439  163442 command_runner.go:130] >       "uid": {
	I0719 04:58:06.623448  163442 command_runner.go:130] >         "value": "0"
	I0719 04:58:06.623454  163442 command_runner.go:130] >       },
	I0719 04:58:06.623464  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.623473  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.623481  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.623489  163442 command_runner.go:130] >     },
	I0719 04:58:06.623495  163442 command_runner.go:130] >     {
	I0719 04:58:06.623508  163442 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0719 04:58:06.623518  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.623529  163442 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0719 04:58:06.623536  163442 command_runner.go:130] >       ],
	I0719 04:58:06.623544  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.623560  163442 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0719 04:58:06.623574  163442 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0719 04:58:06.623581  163442 command_runner.go:130] >       ],
	I0719 04:58:06.623592  163442 command_runner.go:130] >       "size": "112198984",
	I0719 04:58:06.623601  163442 command_runner.go:130] >       "uid": {
	I0719 04:58:06.623608  163442 command_runner.go:130] >         "value": "0"
	I0719 04:58:06.623616  163442 command_runner.go:130] >       },
	I0719 04:58:06.623624  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.623633  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.623642  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.623650  163442 command_runner.go:130] >     },
	I0719 04:58:06.623656  163442 command_runner.go:130] >     {
	I0719 04:58:06.623669  163442 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0719 04:58:06.623679  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.623691  163442 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0719 04:58:06.623698  163442 command_runner.go:130] >       ],
	I0719 04:58:06.623706  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.623729  163442 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0719 04:58:06.623744  163442 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0719 04:58:06.623752  163442 command_runner.go:130] >       ],
	I0719 04:58:06.623759  163442 command_runner.go:130] >       "size": "85953945",
	I0719 04:58:06.623768  163442 command_runner.go:130] >       "uid": null,
	I0719 04:58:06.623777  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.623784  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.623791  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.623797  163442 command_runner.go:130] >     },
	I0719 04:58:06.623805  163442 command_runner.go:130] >     {
	I0719 04:58:06.623816  163442 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0719 04:58:06.623826  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.623836  163442 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0719 04:58:06.623845  163442 command_runner.go:130] >       ],
	I0719 04:58:06.623852  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.623868  163442 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0719 04:58:06.623883  163442 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0719 04:58:06.623891  163442 command_runner.go:130] >       ],
	I0719 04:58:06.623897  163442 command_runner.go:130] >       "size": "63051080",
	I0719 04:58:06.623906  163442 command_runner.go:130] >       "uid": {
	I0719 04:58:06.623913  163442 command_runner.go:130] >         "value": "0"
	I0719 04:58:06.623921  163442 command_runner.go:130] >       },
	I0719 04:58:06.623930  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.623940  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.623950  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.623957  163442 command_runner.go:130] >     },
	I0719 04:58:06.623964  163442 command_runner.go:130] >     {
	I0719 04:58:06.623977  163442 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0719 04:58:06.623985  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.623993  163442 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0719 04:58:06.624001  163442 command_runner.go:130] >       ],
	I0719 04:58:06.624009  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.624024  163442 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0719 04:58:06.624040  163442 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0719 04:58:06.624049  163442 command_runner.go:130] >       ],
	I0719 04:58:06.624057  163442 command_runner.go:130] >       "size": "750414",
	I0719 04:58:06.624066  163442 command_runner.go:130] >       "uid": {
	I0719 04:58:06.624074  163442 command_runner.go:130] >         "value": "65535"
	I0719 04:58:06.624081  163442 command_runner.go:130] >       },
	I0719 04:58:06.624088  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.624097  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.624105  163442 command_runner.go:130] >       "pinned": true
	I0719 04:58:06.624112  163442 command_runner.go:130] >     }
	I0719 04:58:06.624118  163442 command_runner.go:130] >   ]
	I0719 04:58:06.624125  163442 command_runner.go:130] > }
	I0719 04:58:06.624245  163442 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 04:58:06.624259  163442 cache_images.go:84] Images are preloaded, skipping loading
	I0719 04:58:06.624269  163442 kubeadm.go:934] updating node { 192.168.39.17 8443 v1.30.3 crio true true} ...
	I0719 04:58:06.624398  163442 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-270078 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-270078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 04:58:06.624486  163442 ssh_runner.go:195] Run: crio config
	I0719 04:58:06.662881  163442 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0719 04:58:06.662909  163442 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0719 04:58:06.662915  163442 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0719 04:58:06.662919  163442 command_runner.go:130] > #
	I0719 04:58:06.662927  163442 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0719 04:58:06.662936  163442 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0719 04:58:06.662946  163442 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0719 04:58:06.662956  163442 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0719 04:58:06.662961  163442 command_runner.go:130] > # reload'.
	I0719 04:58:06.662970  163442 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0719 04:58:06.662978  163442 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0719 04:58:06.662986  163442 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0719 04:58:06.663002  163442 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0719 04:58:06.663010  163442 command_runner.go:130] > [crio]
	I0719 04:58:06.663020  163442 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0719 04:58:06.663050  163442 command_runner.go:130] > # containers images, in this directory.
	I0719 04:58:06.663064  163442 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0719 04:58:06.663079  163442 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0719 04:58:06.663145  163442 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0719 04:58:06.663166  163442 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0719 04:58:06.663298  163442 command_runner.go:130] > # imagestore = ""
	I0719 04:58:06.663319  163442 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0719 04:58:06.663329  163442 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0719 04:58:06.663414  163442 command_runner.go:130] > storage_driver = "overlay"
	I0719 04:58:06.663428  163442 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0719 04:58:06.663439  163442 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0719 04:58:06.663448  163442 command_runner.go:130] > storage_option = [
	I0719 04:58:06.663585  163442 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0719 04:58:06.663597  163442 command_runner.go:130] > ]
	I0719 04:58:06.663609  163442 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0719 04:58:06.663624  163442 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0719 04:58:06.663907  163442 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0719 04:58:06.663924  163442 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0719 04:58:06.663930  163442 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0719 04:58:06.663935  163442 command_runner.go:130] > # always happen on a node reboot
	I0719 04:58:06.664131  163442 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0719 04:58:06.664151  163442 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0719 04:58:06.664160  163442 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0719 04:58:06.664171  163442 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0719 04:58:06.664277  163442 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0719 04:58:06.664299  163442 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0719 04:58:06.664312  163442 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0719 04:58:06.664510  163442 command_runner.go:130] > # internal_wipe = true
	I0719 04:58:06.664530  163442 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0719 04:58:06.664538  163442 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0719 04:58:06.664742  163442 command_runner.go:130] > # internal_repair = false
	I0719 04:58:06.664753  163442 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0719 04:58:06.664759  163442 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0719 04:58:06.664764  163442 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0719 04:58:06.665002  163442 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0719 04:58:06.665012  163442 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0719 04:58:06.665016  163442 command_runner.go:130] > [crio.api]
	I0719 04:58:06.665021  163442 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0719 04:58:06.665303  163442 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0719 04:58:06.665323  163442 command_runner.go:130] > # IP address on which the stream server will listen.
	I0719 04:58:06.665397  163442 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0719 04:58:06.665420  163442 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0719 04:58:06.665430  163442 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0719 04:58:06.665641  163442 command_runner.go:130] > # stream_port = "0"
	I0719 04:58:06.665658  163442 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0719 04:58:06.665932  163442 command_runner.go:130] > # stream_enable_tls = false
	I0719 04:58:06.665949  163442 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0719 04:58:06.666348  163442 command_runner.go:130] > # stream_idle_timeout = ""
	I0719 04:58:06.666368  163442 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0719 04:58:06.666382  163442 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0719 04:58:06.666391  163442 command_runner.go:130] > # minutes.
	I0719 04:58:06.666398  163442 command_runner.go:130] > # stream_tls_cert = ""
	I0719 04:58:06.666409  163442 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0719 04:58:06.666419  163442 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0719 04:58:06.666467  163442 command_runner.go:130] > # stream_tls_key = ""
	I0719 04:58:06.666488  163442 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0719 04:58:06.666499  163442 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0719 04:58:06.666520  163442 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0719 04:58:06.666530  163442 command_runner.go:130] > # stream_tls_ca = ""
	I0719 04:58:06.666538  163442 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0719 04:58:06.666546  163442 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0719 04:58:06.666553  163442 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0719 04:58:06.666560  163442 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0719 04:58:06.666566  163442 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0719 04:58:06.666575  163442 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0719 04:58:06.666584  163442 command_runner.go:130] > [crio.runtime]
	I0719 04:58:06.666596  163442 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0719 04:58:06.666606  163442 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0719 04:58:06.666616  163442 command_runner.go:130] > # "nofile=1024:2048"
	I0719 04:58:06.666625  163442 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0719 04:58:06.666636  163442 command_runner.go:130] > # default_ulimits = [
	I0719 04:58:06.666644  163442 command_runner.go:130] > # ]
	I0719 04:58:06.666653  163442 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0719 04:58:06.666905  163442 command_runner.go:130] > # no_pivot = false
	I0719 04:58:06.666915  163442 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0719 04:58:06.666921  163442 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0719 04:58:06.667135  163442 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0719 04:58:06.667149  163442 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0719 04:58:06.667154  163442 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0719 04:58:06.667163  163442 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0719 04:58:06.667273  163442 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0719 04:58:06.667289  163442 command_runner.go:130] > # Cgroup setting for conmon
	I0719 04:58:06.667300  163442 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0719 04:58:06.667426  163442 command_runner.go:130] > conmon_cgroup = "pod"
	I0719 04:58:06.667445  163442 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0719 04:58:06.667454  163442 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0719 04:58:06.667467  163442 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0719 04:58:06.667476  163442 command_runner.go:130] > conmon_env = [
	I0719 04:58:06.667520  163442 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0719 04:58:06.667547  163442 command_runner.go:130] > ]
	I0719 04:58:06.667561  163442 command_runner.go:130] > # Additional environment variables to set for all the
	I0719 04:58:06.667569  163442 command_runner.go:130] > # containers. These are overridden if set in the
	I0719 04:58:06.667582  163442 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0719 04:58:06.667685  163442 command_runner.go:130] > # default_env = [
	I0719 04:58:06.667869  163442 command_runner.go:130] > # ]
	I0719 04:58:06.667889  163442 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0719 04:58:06.667902  163442 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0719 04:58:06.668062  163442 command_runner.go:130] > # selinux = false
	I0719 04:58:06.668081  163442 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0719 04:58:06.668090  163442 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0719 04:58:06.668099  163442 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0719 04:58:06.668192  163442 command_runner.go:130] > # seccomp_profile = ""
	I0719 04:58:06.668205  163442 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0719 04:58:06.668238  163442 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0719 04:58:06.668255  163442 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0719 04:58:06.668263  163442 command_runner.go:130] > # which might increase security.
	I0719 04:58:06.668271  163442 command_runner.go:130] > # This option is currently deprecated,
	I0719 04:58:06.668280  163442 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0719 04:58:06.668328  163442 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0719 04:58:06.668345  163442 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0719 04:58:06.668355  163442 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0719 04:58:06.668369  163442 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0719 04:58:06.668383  163442 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0719 04:58:06.668392  163442 command_runner.go:130] > # This option supports live configuration reload.
	I0719 04:58:06.668536  163442 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0719 04:58:06.668546  163442 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0719 04:58:06.668550  163442 command_runner.go:130] > # the cgroup blockio controller.
	I0719 04:58:06.668684  163442 command_runner.go:130] > # blockio_config_file = ""
	I0719 04:58:06.668700  163442 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0719 04:58:06.668707  163442 command_runner.go:130] > # blockio parameters.
	I0719 04:58:06.668955  163442 command_runner.go:130] > # blockio_reload = false
	I0719 04:58:06.668974  163442 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0719 04:58:06.668980  163442 command_runner.go:130] > # irqbalance daemon.
	I0719 04:58:06.669201  163442 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0719 04:58:06.669217  163442 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0719 04:58:06.669227  163442 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0719 04:58:06.669238  163442 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0719 04:58:06.669427  163442 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0719 04:58:06.669447  163442 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0719 04:58:06.669456  163442 command_runner.go:130] > # This option supports live configuration reload.
	I0719 04:58:06.669654  163442 command_runner.go:130] > # rdt_config_file = ""
	I0719 04:58:06.669673  163442 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0719 04:58:06.669747  163442 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0719 04:58:06.669787  163442 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0719 04:58:06.669981  163442 command_runner.go:130] > # separate_pull_cgroup = ""
	I0719 04:58:06.669997  163442 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0719 04:58:06.670009  163442 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0719 04:58:06.670016  163442 command_runner.go:130] > # will be added.
	I0719 04:58:06.670100  163442 command_runner.go:130] > # default_capabilities = [
	I0719 04:58:06.670241  163442 command_runner.go:130] > # 	"CHOWN",
	I0719 04:58:06.670363  163442 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0719 04:58:06.670475  163442 command_runner.go:130] > # 	"FSETID",
	I0719 04:58:06.670594  163442 command_runner.go:130] > # 	"FOWNER",
	I0719 04:58:06.670715  163442 command_runner.go:130] > # 	"SETGID",
	I0719 04:58:06.670882  163442 command_runner.go:130] > # 	"SETUID",
	I0719 04:58:06.670990  163442 command_runner.go:130] > # 	"SETPCAP",
	I0719 04:58:06.671106  163442 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0719 04:58:06.671223  163442 command_runner.go:130] > # 	"KILL",
	I0719 04:58:06.671332  163442 command_runner.go:130] > # ]
	I0719 04:58:06.671348  163442 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0719 04:58:06.671364  163442 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0719 04:58:06.671553  163442 command_runner.go:130] > # add_inheritable_capabilities = false
	I0719 04:58:06.671567  163442 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0719 04:58:06.671576  163442 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0719 04:58:06.671583  163442 command_runner.go:130] > default_sysctls = [
	I0719 04:58:06.671633  163442 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0719 04:58:06.671670  163442 command_runner.go:130] > ]
	I0719 04:58:06.671681  163442 command_runner.go:130] > # List of devices on the host that a
	I0719 04:58:06.671691  163442 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0719 04:58:06.671797  163442 command_runner.go:130] > # allowed_devices = [
	I0719 04:58:06.671969  163442 command_runner.go:130] > # 	"/dev/fuse",
	I0719 04:58:06.671980  163442 command_runner.go:130] > # ]
	I0719 04:58:06.671989  163442 command_runner.go:130] > # List of additional devices. specified as
	I0719 04:58:06.671999  163442 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0719 04:58:06.672010  163442 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0719 04:58:06.672021  163442 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0719 04:58:06.672030  163442 command_runner.go:130] > # additional_devices = [
	I0719 04:58:06.672037  163442 command_runner.go:130] > # ]
	I0719 04:58:06.672049  163442 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0719 04:58:06.672057  163442 command_runner.go:130] > # cdi_spec_dirs = [
	I0719 04:58:06.672067  163442 command_runner.go:130] > # 	"/etc/cdi",
	I0719 04:58:06.672073  163442 command_runner.go:130] > # 	"/var/run/cdi",
	I0719 04:58:06.672082  163442 command_runner.go:130] > # ]
	I0719 04:58:06.672092  163442 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0719 04:58:06.672101  163442 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0719 04:58:06.672106  163442 command_runner.go:130] > # Defaults to false.
	I0719 04:58:06.672117  163442 command_runner.go:130] > # device_ownership_from_security_context = false
	I0719 04:58:06.672129  163442 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0719 04:58:06.672141  163442 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0719 04:58:06.672151  163442 command_runner.go:130] > # hooks_dir = [
	I0719 04:58:06.672161  163442 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0719 04:58:06.672168  163442 command_runner.go:130] > # ]
	I0719 04:58:06.672177  163442 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0719 04:58:06.672187  163442 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0719 04:58:06.672193  163442 command_runner.go:130] > # its default mounts from the following two files:
	I0719 04:58:06.672200  163442 command_runner.go:130] > #
	I0719 04:58:06.672210  163442 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0719 04:58:06.672224  163442 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0719 04:58:06.672232  163442 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0719 04:58:06.672240  163442 command_runner.go:130] > #
	I0719 04:58:06.672249  163442 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0719 04:58:06.672263  163442 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0719 04:58:06.672276  163442 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0719 04:58:06.672287  163442 command_runner.go:130] > #      only add mounts it finds in this file.
	I0719 04:58:06.672295  163442 command_runner.go:130] > #
	I0719 04:58:06.672303  163442 command_runner.go:130] > # default_mounts_file = ""
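For reference, the override file described above is just one /SRC:/DST pair per line. A minimal sketch (the mount itself is illustrative):

	sudo tee /etc/containers/mounts.conf <<-'EOF'
	/usr/share/rhel/secrets:/run/secrets
	EOF
	sudo systemctl restart crio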
	I0719 04:58:06.672314  163442 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0719 04:58:06.672329  163442 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0719 04:58:06.672342  163442 command_runner.go:130] > pids_limit = 1024
	I0719 04:58:06.672355  163442 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0719 04:58:06.672368  163442 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0719 04:58:06.672381  163442 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0719 04:58:06.672392  163442 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0719 04:58:06.672404  163442 command_runner.go:130] > # log_size_max = -1
	I0719 04:58:06.672416  163442 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0719 04:58:06.672529  163442 command_runner.go:130] > # log_to_journald = false
	I0719 04:58:06.672549  163442 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0719 04:58:06.672560  163442 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0719 04:58:06.672570  163442 command_runner.go:130] > # Path to directory for container attach sockets.
	I0719 04:58:06.672580  163442 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0719 04:58:06.672589  163442 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0719 04:58:06.672596  163442 command_runner.go:130] > # bind_mount_prefix = ""
	I0719 04:58:06.672606  163442 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0719 04:58:06.672616  163442 command_runner.go:130] > # read_only = false
	I0719 04:58:06.672627  163442 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0719 04:58:06.672642  163442 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0719 04:58:06.672652  163442 command_runner.go:130] > # live configuration reload.
	I0719 04:58:06.672659  163442 command_runner.go:130] > # log_level = "info"
	I0719 04:58:06.672672  163442 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0719 04:58:06.672683  163442 command_runner.go:130] > # This option supports live configuration reload.
	I0719 04:58:06.672693  163442 command_runner.go:130] > # log_filter = ""
	I0719 04:58:06.672703  163442 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0719 04:58:06.672717  163442 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0719 04:58:06.672723  163442 command_runner.go:130] > # separated by comma.
	I0719 04:58:06.672733  163442 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0719 04:58:06.672742  163442 command_runner.go:130] > # uid_mappings = ""
	I0719 04:58:06.672752  163442 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0719 04:58:06.672821  163442 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0719 04:58:06.672839  163442 command_runner.go:130] > # separated by comma.
	I0719 04:58:06.672852  163442 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0719 04:58:06.672866  163442 command_runner.go:130] > # gid_mappings = ""
	I0719 04:58:06.672903  163442 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0719 04:58:06.672935  163442 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0719 04:58:06.672947  163442 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0719 04:58:06.672963  163442 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0719 04:58:06.672972  163442 command_runner.go:130] > # minimum_mappable_uid = -1
	I0719 04:58:06.672983  163442 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0719 04:58:06.672995  163442 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0719 04:58:06.673008  163442 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0719 04:58:06.673032  163442 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0719 04:58:06.673045  163442 command_runner.go:130] > # minimum_mappable_gid = -1
	I0719 04:58:06.673057  163442 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0719 04:58:06.673084  163442 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0719 04:58:06.673096  163442 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0719 04:58:06.673106  163442 command_runner.go:130] > # ctr_stop_timeout = 30
	I0719 04:58:06.673115  163442 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0719 04:58:06.673128  163442 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0719 04:58:06.673138  163442 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0719 04:58:06.673148  163442 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0719 04:58:06.673155  163442 command_runner.go:130] > drop_infra_ctr = false
	I0719 04:58:06.673168  163442 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0719 04:58:06.673183  163442 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0719 04:58:06.673197  163442 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0719 04:58:06.673210  163442 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0719 04:58:06.673224  163442 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0719 04:58:06.673236  163442 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0719 04:58:06.673249  163442 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0719 04:58:06.673260  163442 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0719 04:58:06.673269  163442 command_runner.go:130] > # shared_cpuset = ""
	I0719 04:58:06.673279  163442 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0719 04:58:06.673290  163442 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0719 04:58:06.673299  163442 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0719 04:58:06.673313  163442 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0719 04:58:06.673323  163442 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0719 04:58:06.673333  163442 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0719 04:58:06.673344  163442 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0719 04:58:06.673355  163442 command_runner.go:130] > # enable_criu_support = false
	I0719 04:58:06.673366  163442 command_runner.go:130] > # Enable/disable the generation of the container,
	I0719 04:58:06.673378  163442 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0719 04:58:06.673388  163442 command_runner.go:130] > # enable_pod_events = false
	I0719 04:58:06.673401  163442 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0719 04:58:06.673422  163442 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0719 04:58:06.673432  163442 command_runner.go:130] > # default_runtime = "runc"
	I0719 04:58:06.673441  163442 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0719 04:58:06.673457  163442 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0719 04:58:06.673473  163442 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0719 04:58:06.673484  163442 command_runner.go:130] > # creation as a file is not desired either.
	I0719 04:58:06.673501  163442 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0719 04:58:06.673512  163442 command_runner.go:130] > # the hostname is being managed dynamically.
	I0719 04:58:06.673523  163442 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0719 04:58:06.673533  163442 command_runner.go:130] > # ]
	I0719 04:58:06.673545  163442 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0719 04:58:06.673557  163442 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0719 04:58:06.673564  163442 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0719 04:58:06.673569  163442 command_runner.go:130] > # Each entry in the table should follow the format:
	I0719 04:58:06.673573  163442 command_runner.go:130] > #
	I0719 04:58:06.673579  163442 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0719 04:58:06.673587  163442 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0719 04:58:06.673606  163442 command_runner.go:130] > # runtime_type = "oci"
	I0719 04:58:06.673613  163442 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0719 04:58:06.673617  163442 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0719 04:58:06.673624  163442 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0719 04:58:06.673631  163442 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0719 04:58:06.673640  163442 command_runner.go:130] > # monitor_env = []
	I0719 04:58:06.673651  163442 command_runner.go:130] > # privileged_without_host_devices = false
	I0719 04:58:06.673661  163442 command_runner.go:130] > # allowed_annotations = []
	I0719 04:58:06.673669  163442 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0719 04:58:06.673678  163442 command_runner.go:130] > # Where:
	I0719 04:58:06.673687  163442 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0719 04:58:06.673699  163442 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0719 04:58:06.673707  163442 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0719 04:58:06.673713  163442 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0719 04:58:06.673719  163442 command_runner.go:130] > #   in $PATH.
	I0719 04:58:06.673725  163442 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0719 04:58:06.673732  163442 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0719 04:58:06.673738  163442 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0719 04:58:06.673744  163442 command_runner.go:130] > #   state.
	I0719 04:58:06.673750  163442 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0719 04:58:06.673757  163442 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0719 04:58:06.673764  163442 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0719 04:58:06.673769  163442 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0719 04:58:06.673777  163442 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0719 04:58:06.673783  163442 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0719 04:58:06.673789  163442 command_runner.go:130] > #   The currently recognized values are:
	I0719 04:58:06.673795  163442 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0719 04:58:06.673804  163442 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0719 04:58:06.673811  163442 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0719 04:58:06.673817  163442 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0719 04:58:06.673827  163442 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0719 04:58:06.673833  163442 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0719 04:58:06.673841  163442 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0719 04:58:06.673848  163442 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0719 04:58:06.673857  163442 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0719 04:58:06.673862  163442 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0719 04:58:06.673866  163442 command_runner.go:130] > #   deprecated option "conmon".
	I0719 04:58:06.673875  163442 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0719 04:58:06.673881  163442 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0719 04:58:06.673888  163442 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0719 04:58:06.673894  163442 command_runner.go:130] > #   should be moved to the container's cgroup
	I0719 04:58:06.673900  163442 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0719 04:58:06.673907  163442 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0719 04:58:06.673913  163442 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0719 04:58:06.673920  163442 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0719 04:58:06.673923  163442 command_runner.go:130] > #
	I0719 04:58:06.673930  163442 command_runner.go:130] > # Using the seccomp notifier feature:
	I0719 04:58:06.673933  163442 command_runner.go:130] > #
	I0719 04:58:06.673941  163442 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0719 04:58:06.673947  163442 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0719 04:58:06.673952  163442 command_runner.go:130] > #
	I0719 04:58:06.673958  163442 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0719 04:58:06.673964  163442 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0719 04:58:06.673969  163442 command_runner.go:130] > #
	I0719 04:58:06.673974  163442 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0719 04:58:06.673981  163442 command_runner.go:130] > # feature.
	I0719 04:58:06.673983  163442 command_runner.go:130] > #
	I0719 04:58:06.673989  163442 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0719 04:58:06.673997  163442 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0719 04:58:06.674006  163442 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0719 04:58:06.674040  163442 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0719 04:58:06.674048  163442 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0719 04:58:06.674052  163442 command_runner.go:130] > #
	I0719 04:58:06.674057  163442 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0719 04:58:06.674065  163442 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0719 04:58:06.674068  163442 command_runner.go:130] > #
	I0719 04:58:06.674074  163442 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0719 04:58:06.674081  163442 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0719 04:58:06.674084  163442 command_runner.go:130] > #
	I0719 04:58:06.674092  163442 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0719 04:58:06.674098  163442 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0719 04:58:06.674104  163442 command_runner.go:130] > # limitation.
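Putting the notifier notes above together, a minimal sketch of a dedicated handler that is allowed to process the annotation (the handler name and drop-in path are illustrative; the runtime path mirrors the runc entry below):

	sudo tee /etc/crio/crio.conf.d/99-seccomp-notifier.conf <<-'EOF'
	[crio.runtime.runtimes.runc-notify]
	runtime_path = "/usr/bin/runc"
	runtime_type = "oci"
	allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]
	EOF
	sudo systemctl restart crio
	# A pod then opts in with the annotation
	# io.kubernetes.cri-o.seccompNotifierAction: "stop" and restartPolicy: Never.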
	I0719 04:58:06.674108  163442 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0719 04:58:06.674114  163442 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0719 04:58:06.674118  163442 command_runner.go:130] > runtime_type = "oci"
	I0719 04:58:06.674124  163442 command_runner.go:130] > runtime_root = "/run/runc"
	I0719 04:58:06.674129  163442 command_runner.go:130] > runtime_config_path = ""
	I0719 04:58:06.674135  163442 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0719 04:58:06.674139  163442 command_runner.go:130] > monitor_cgroup = "pod"
	I0719 04:58:06.674143  163442 command_runner.go:130] > monitor_exec_cgroup = ""
	I0719 04:58:06.674147  163442 command_runner.go:130] > monitor_env = [
	I0719 04:58:06.674156  163442 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0719 04:58:06.674161  163442 command_runner.go:130] > ]
	I0719 04:58:06.674166  163442 command_runner.go:130] > privileged_without_host_devices = false
	I0719 04:58:06.674173  163442 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0719 04:58:06.674178  163442 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0719 04:58:06.674185  163442 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0719 04:58:06.674193  163442 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0719 04:58:06.674202  163442 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0719 04:58:06.674208  163442 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0719 04:58:06.674219  163442 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0719 04:58:06.674232  163442 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0719 04:58:06.674240  163442 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0719 04:58:06.674246  163442 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0719 04:58:06.674252  163442 command_runner.go:130] > # Example:
	I0719 04:58:06.674257  163442 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0719 04:58:06.674262  163442 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0719 04:58:06.674269  163442 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0719 04:58:06.674274  163442 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0719 04:58:06.674281  163442 command_runner.go:130] > # cpuset = 0
	I0719 04:58:06.674284  163442 command_runner.go:130] > # cpushares = "0-1"
	I0719 04:58:06.674287  163442 command_runner.go:130] > # Where:
	I0719 04:58:06.674292  163442 command_runner.go:130] > # The workload name is workload-type.
	I0719 04:58:06.674299  163442 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0719 04:58:06.674306  163442 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0719 04:58:06.674311  163442 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0719 04:58:06.674322  163442 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0719 04:58:06.674330  163442 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
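A minimal sketch of a pod opting into the example workload above, using the annotations just described (image, container name and cpushares value are illustrative):

	kubectl apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo
	  annotations:
	    io.crio/workload: ""                           # activation annotation, key only
	    io.crio.workload-type/demo: '{"cpushares": "512"}'
	spec:
	  containers:
	  - name: demo
	    image: busybox
	    command: ["sleep", "3600"]
	EOF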
	I0719 04:58:06.674336  163442 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0719 04:58:06.674345  163442 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0719 04:58:06.674349  163442 command_runner.go:130] > # Default value is set to true
	I0719 04:58:06.674355  163442 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0719 04:58:06.674360  163442 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0719 04:58:06.674369  163442 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0719 04:58:06.674376  163442 command_runner.go:130] > # Default value is set to 'false'
	I0719 04:58:06.674380  163442 command_runner.go:130] > # disable_hostport_mapping = false
	I0719 04:58:06.674388  163442 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0719 04:58:06.674392  163442 command_runner.go:130] > #
	I0719 04:58:06.674400  163442 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0719 04:58:06.674407  163442 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0719 04:58:06.674415  163442 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0719 04:58:06.674421  163442 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0719 04:58:06.674426  163442 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0719 04:58:06.674429  163442 command_runner.go:130] > [crio.image]
	I0719 04:58:06.674435  163442 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0719 04:58:06.674439  163442 command_runner.go:130] > # default_transport = "docker://"
	I0719 04:58:06.674445  163442 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0719 04:58:06.674450  163442 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0719 04:58:06.674454  163442 command_runner.go:130] > # global_auth_file = ""
	I0719 04:58:06.674458  163442 command_runner.go:130] > # The image used to instantiate infra containers.
	I0719 04:58:06.674463  163442 command_runner.go:130] > # This option supports live configuration reload.
	I0719 04:58:06.674467  163442 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0719 04:58:06.674473  163442 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0719 04:58:06.674478  163442 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0719 04:58:06.674482  163442 command_runner.go:130] > # This option supports live configuration reload.
	I0719 04:58:06.674486  163442 command_runner.go:130] > # pause_image_auth_file = ""
	I0719 04:58:06.674491  163442 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0719 04:58:06.674496  163442 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0719 04:58:06.674502  163442 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0719 04:58:06.674507  163442 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0719 04:58:06.674511  163442 command_runner.go:130] > # pause_command = "/pause"
	I0719 04:58:06.674516  163442 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0719 04:58:06.674521  163442 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0719 04:58:06.674527  163442 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0719 04:58:06.674532  163442 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0719 04:58:06.674537  163442 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0719 04:58:06.674542  163442 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0719 04:58:06.674545  163442 command_runner.go:130] > # pinned_images = [
	I0719 04:58:06.674548  163442 command_runner.go:130] > # ]
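A sketch of the three pattern styles the comment describes, written as a drop-in (the image names are illustrative):

	sudo tee /etc/crio/crio.conf.d/10-pinned-images.conf <<-'EOF'
	[crio.image]
	pinned_images = [
	    "registry.k8s.io/pause:3.9",  # exact: must match the entire name
	    "registry.k8s.io/kube-*",     # glob: wildcard only at the end
	    "*coredns*",                  # keyword: wildcards on both ends
	]
	EOF
	sudo systemctl restart crio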
	I0719 04:58:06.674553  163442 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0719 04:58:06.674559  163442 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0719 04:58:06.674564  163442 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0719 04:58:06.674570  163442 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0719 04:58:06.674575  163442 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0719 04:58:06.674578  163442 command_runner.go:130] > # signature_policy = ""
	I0719 04:58:06.674583  163442 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0719 04:58:06.674593  163442 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0719 04:58:06.674598  163442 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0719 04:58:06.674603  163442 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0719 04:58:06.674608  163442 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0719 04:58:06.674612  163442 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0719 04:58:06.674618  163442 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0719 04:58:06.674624  163442 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0719 04:58:06.674627  163442 command_runner.go:130] > # changing them here.
	I0719 04:58:06.674631  163442 command_runner.go:130] > # insecure_registries = [
	I0719 04:58:06.674635  163442 command_runner.go:130] > # ]
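As the comment suggests, insecure registries are usually better expressed in containers-registries.conf(5). A minimal sketch (the registry address is illustrative):

	sudo tee -a /etc/containers/registries.conf <<-'EOF'
	[[registry]]
	location = "192.168.39.1:5000"
	insecure = true
	EOF
	sudo systemctl restart crio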
	I0719 04:58:06.674640  163442 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0719 04:58:06.674645  163442 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0719 04:58:06.674649  163442 command_runner.go:130] > # image_volumes = "mkdir"
	I0719 04:58:06.674653  163442 command_runner.go:130] > # Temporary directory to use for storing big files
	I0719 04:58:06.674659  163442 command_runner.go:130] > # big_files_temporary_dir = ""
	I0719 04:58:06.674667  163442 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0719 04:58:06.674671  163442 command_runner.go:130] > # CNI plugins.
	I0719 04:58:06.674677  163442 command_runner.go:130] > [crio.network]
	I0719 04:58:06.674683  163442 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0719 04:58:06.674690  163442 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0719 04:58:06.674693  163442 command_runner.go:130] > # cni_default_network = ""
	I0719 04:58:06.674699  163442 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0719 04:58:06.674706  163442 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0719 04:58:06.674711  163442 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0719 04:58:06.674717  163442 command_runner.go:130] > # plugin_dirs = [
	I0719 04:58:06.674721  163442 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0719 04:58:06.674726  163442 command_runner.go:130] > # ]
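Since cni_default_network is left empty here, CRI-O simply uses the first configuration it finds in network_dir; on the node that directory can be inspected directly (a sketch):

	sudo ls -l /etc/cni/net.d/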
	I0719 04:58:06.674732  163442 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0719 04:58:06.674737  163442 command_runner.go:130] > [crio.metrics]
	I0719 04:58:06.674743  163442 command_runner.go:130] > # Globally enable or disable metrics support.
	I0719 04:58:06.674749  163442 command_runner.go:130] > enable_metrics = true
	I0719 04:58:06.674754  163442 command_runner.go:130] > # Specify enabled metrics collectors.
	I0719 04:58:06.674761  163442 command_runner.go:130] > # Per default all metrics are enabled.
	I0719 04:58:06.674766  163442 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0719 04:58:06.674775  163442 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0719 04:58:06.674780  163442 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0719 04:58:06.674787  163442 command_runner.go:130] > # metrics_collectors = [
	I0719 04:58:06.674791  163442 command_runner.go:130] > # 	"operations",
	I0719 04:58:06.674798  163442 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0719 04:58:06.674803  163442 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0719 04:58:06.674809  163442 command_runner.go:130] > # 	"operations_errors",
	I0719 04:58:06.674815  163442 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0719 04:58:06.674829  163442 command_runner.go:130] > # 	"image_pulls_by_name",
	I0719 04:58:06.674840  163442 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0719 04:58:06.674848  163442 command_runner.go:130] > # 	"image_pulls_failures",
	I0719 04:58:06.674854  163442 command_runner.go:130] > # 	"image_pulls_successes",
	I0719 04:58:06.674859  163442 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0719 04:58:06.674866  163442 command_runner.go:130] > # 	"image_layer_reuse",
	I0719 04:58:06.674870  163442 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0719 04:58:06.674876  163442 command_runner.go:130] > # 	"containers_oom_total",
	I0719 04:58:06.674885  163442 command_runner.go:130] > # 	"containers_oom",
	I0719 04:58:06.674891  163442 command_runner.go:130] > # 	"processes_defunct",
	I0719 04:58:06.674901  163442 command_runner.go:130] > # 	"operations_total",
	I0719 04:58:06.674908  163442 command_runner.go:130] > # 	"operations_latency_seconds",
	I0719 04:58:06.674916  163442 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0719 04:58:06.674920  163442 command_runner.go:130] > # 	"operations_errors_total",
	I0719 04:58:06.674926  163442 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0719 04:58:06.674930  163442 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0719 04:58:06.674936  163442 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0719 04:58:06.674942  163442 command_runner.go:130] > # 	"image_pulls_success_total",
	I0719 04:58:06.674946  163442 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0719 04:58:06.674953  163442 command_runner.go:130] > # 	"containers_oom_count_total",
	I0719 04:58:06.674957  163442 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0719 04:58:06.674966  163442 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0719 04:58:06.674974  163442 command_runner.go:130] > # ]
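With enable_metrics = true and the default port shown just below, the collectors above are exposed in Prometheus text format and can be sanity-checked from the node. A sketch, assuming the default non-TLS setup:

	curl -s http://127.0.0.1:9090/metrics | grep '^crio_operations' | head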
	I0719 04:58:06.674983  163442 command_runner.go:130] > # The port on which the metrics server will listen.
	I0719 04:58:06.674993  163442 command_runner.go:130] > # metrics_port = 9090
	I0719 04:58:06.675003  163442 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0719 04:58:06.675011  163442 command_runner.go:130] > # metrics_socket = ""
	I0719 04:58:06.675021  163442 command_runner.go:130] > # The certificate for the secure metrics server.
	I0719 04:58:06.675029  163442 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0719 04:58:06.675035  163442 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0719 04:58:06.675042  163442 command_runner.go:130] > # certificate on any modification event.
	I0719 04:58:06.675046  163442 command_runner.go:130] > # metrics_cert = ""
	I0719 04:58:06.675056  163442 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0719 04:58:06.675065  163442 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0719 04:58:06.675075  163442 command_runner.go:130] > # metrics_key = ""
	I0719 04:58:06.675086  163442 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0719 04:58:06.675095  163442 command_runner.go:130] > [crio.tracing]
	I0719 04:58:06.675104  163442 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0719 04:58:06.675113  163442 command_runner.go:130] > # enable_tracing = false
	I0719 04:58:06.675119  163442 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0719 04:58:06.675125  163442 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0719 04:58:06.675131  163442 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0719 04:58:06.675140  163442 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0719 04:58:06.675149  163442 command_runner.go:130] > # CRI-O NRI configuration.
	I0719 04:58:06.675158  163442 command_runner.go:130] > [crio.nri]
	I0719 04:58:06.675166  163442 command_runner.go:130] > # Globally enable or disable NRI.
	I0719 04:58:06.675174  163442 command_runner.go:130] > # enable_nri = false
	I0719 04:58:06.675183  163442 command_runner.go:130] > # NRI socket to listen on.
	I0719 04:58:06.675191  163442 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0719 04:58:06.675201  163442 command_runner.go:130] > # NRI plugin directory to use.
	I0719 04:58:06.675209  163442 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0719 04:58:06.675217  163442 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0719 04:58:06.675224  163442 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0719 04:58:06.675236  163442 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0719 04:58:06.675246  163442 command_runner.go:130] > # nri_disable_connections = false
	I0719 04:58:06.675256  163442 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0719 04:58:06.675267  163442 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0719 04:58:06.675277  163442 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0719 04:58:06.675286  163442 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0719 04:58:06.675297  163442 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0719 04:58:06.675303  163442 command_runner.go:130] > [crio.stats]
	I0719 04:58:06.675311  163442 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0719 04:58:06.675323  163442 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0719 04:58:06.675332  163442 command_runner.go:130] > # stats_collection_period = 0
	I0719 04:58:06.675362  163442 command_runner.go:130] ! time="2024-07-19 04:58:06.630452421Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0719 04:58:06.675382  163442 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0719 04:58:06.675505  163442 cni.go:84] Creating CNI manager for ""
	I0719 04:58:06.675515  163442 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0719 04:58:06.675525  163442 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 04:58:06.675558  163442 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.17 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-270078 NodeName:multinode-270078 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.17"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.17 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 04:58:06.675718  163442 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.17
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-270078"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.17
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.17"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
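
The rendered config above is what gets copied to the node as /var/tmp/minikube/kubeadm.yaml.new a few lines further down and is ultimately consumed by kubeadm; in isolation it could be exercised with something like the following (a sketch, not the exact invocation minikube performs):

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml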
	
	I0719 04:58:06.675791  163442 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 04:58:06.685278  163442 command_runner.go:130] > kubeadm
	I0719 04:58:06.685296  163442 command_runner.go:130] > kubectl
	I0719 04:58:06.685301  163442 command_runner.go:130] > kubelet
	I0719 04:58:06.685391  163442 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 04:58:06.685443  163442 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 04:58:06.694231  163442 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0719 04:58:06.709460  163442 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 04:58:06.724369  163442 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0719 04:58:06.739391  163442 ssh_runner.go:195] Run: grep 192.168.39.17	control-plane.minikube.internal$ /etc/hosts
	I0719 04:58:06.742767  163442 command_runner.go:130] > 192.168.39.17	control-plane.minikube.internal
	I0719 04:58:06.742839  163442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:58:06.874255  163442 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 04:58:06.888174  163442 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/multinode-270078 for IP: 192.168.39.17
	I0719 04:58:06.888200  163442 certs.go:194] generating shared ca certs ...
	I0719 04:58:06.888222  163442 certs.go:226] acquiring lock for ca certs: {Name:mk4073377b5f511f5cfaf63e5b0f12377e731a57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:58:06.888412  163442 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key
	I0719 04:58:06.888465  163442 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key
	I0719 04:58:06.888477  163442 certs.go:256] generating profile certs ...
	I0719 04:58:06.888557  163442 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/multinode-270078/client.key
	I0719 04:58:06.888613  163442 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/multinode-270078/apiserver.key.4ebc0a81
	I0719 04:58:06.888645  163442 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/multinode-270078/proxy-client.key
	I0719 04:58:06.888655  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 04:58:06.888667  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0719 04:58:06.888680  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 04:58:06.888692  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 04:58:06.888705  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/multinode-270078/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 04:58:06.888715  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/multinode-270078/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 04:58:06.888726  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/multinode-270078/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 04:58:06.888747  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/multinode-270078/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 04:58:06.888805  163442 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem (1338 bytes)
	W0719 04:58:06.888845  163442 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170_empty.pem, impossibly tiny 0 bytes
	I0719 04:58:06.888860  163442 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 04:58:06.888884  163442 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem (1082 bytes)
	I0719 04:58:06.888911  163442 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem (1123 bytes)
	I0719 04:58:06.888931  163442 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem (1679 bytes)
	I0719 04:58:06.888969  163442 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem (1708 bytes)
	I0719 04:58:06.888997  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:58:06.889010  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem -> /usr/share/ca-certificates/130170.pem
	I0719 04:58:06.889022  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem -> /usr/share/ca-certificates/1301702.pem
	I0719 04:58:06.889600  163442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 04:58:06.911452  163442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 04:58:06.933215  163442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 04:58:06.954317  163442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 04:58:06.976640  163442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/multinode-270078/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 04:58:06.998183  163442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/multinode-270078/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 04:58:07.021306  163442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/multinode-270078/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 04:58:07.044551  163442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/multinode-270078/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 04:58:07.068942  163442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 04:58:07.091080  163442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem --> /usr/share/ca-certificates/130170.pem (1338 bytes)
	I0719 04:58:07.112817  163442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem --> /usr/share/ca-certificates/1301702.pem (1708 bytes)
	I0719 04:58:07.134513  163442 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 04:58:07.150104  163442 ssh_runner.go:195] Run: openssl version
	I0719 04:58:07.155452  163442 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0719 04:58:07.155598  163442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1301702.pem && ln -fs /usr/share/ca-certificates/1301702.pem /etc/ssl/certs/1301702.pem"
	I0719 04:58:07.165528  163442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1301702.pem
	I0719 04:58:07.169339  163442 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 19 04:19 /usr/share/ca-certificates/1301702.pem
	I0719 04:58:07.169447  163442 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 04:19 /usr/share/ca-certificates/1301702.pem
	I0719 04:58:07.169488  163442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1301702.pem
	I0719 04:58:07.174419  163442 command_runner.go:130] > 3ec20f2e
	I0719 04:58:07.174609  163442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1301702.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 04:58:07.182999  163442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 04:58:07.192616  163442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:58:07.196646  163442 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 19 03:38 /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:58:07.196711  163442 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:38 /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:58:07.196763  163442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:58:07.201876  163442 command_runner.go:130] > b5213941
	I0719 04:58:07.201920  163442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 04:58:07.210330  163442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130170.pem && ln -fs /usr/share/ca-certificates/130170.pem /etc/ssl/certs/130170.pem"
	I0719 04:58:07.222545  163442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130170.pem
	I0719 04:58:07.226735  163442 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 19 04:19 /usr/share/ca-certificates/130170.pem
	I0719 04:58:07.226805  163442 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 04:19 /usr/share/ca-certificates/130170.pem
	I0719 04:58:07.226864  163442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130170.pem
	I0719 04:58:07.231996  163442 command_runner.go:130] > 51391683
	I0719 04:58:07.232212  163442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/130170.pem /etc/ssl/certs/51391683.0"
	I0719 04:58:07.258560  163442 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 04:58:07.263354  163442 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 04:58:07.263373  163442 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0719 04:58:07.263378  163442 command_runner.go:130] > Device: 253,1	Inode: 5244971     Links: 1
	I0719 04:58:07.263384  163442 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0719 04:58:07.263392  163442 command_runner.go:130] > Access: 2024-07-19 04:51:24.057666638 +0000
	I0719 04:58:07.263396  163442 command_runner.go:130] > Modify: 2024-07-19 04:51:24.057666638 +0000
	I0719 04:58:07.263401  163442 command_runner.go:130] > Change: 2024-07-19 04:51:24.057666638 +0000
	I0719 04:58:07.263406  163442 command_runner.go:130] >  Birth: 2024-07-19 04:51:24.057666638 +0000
	I0719 04:58:07.263681  163442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 04:58:07.269136  163442 command_runner.go:130] > Certificate will not expire
	I0719 04:58:07.269299  163442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 04:58:07.274494  163442 command_runner.go:130] > Certificate will not expire
	I0719 04:58:07.274667  163442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 04:58:07.279841  163442 command_runner.go:130] > Certificate will not expire
	I0719 04:58:07.279912  163442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 04:58:07.285055  163442 command_runner.go:130] > Certificate will not expire
	I0719 04:58:07.285144  163442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 04:58:07.290218  163442 command_runner.go:130] > Certificate will not expire
	I0719 04:58:07.290286  163442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 04:58:07.295280  163442 command_runner.go:130] > Certificate will not expire
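	(The `-checkend 86400` calls above ask OpenSSL whether each control-plane certificate expires within the next 86400 seconds, i.e. 24 hours; the repeated "Certificate will not expire" responses mean every cert is still valid for at least another day. A standalone equivalent, using one of the cert paths checked above:
	    # exit status 0: still valid 24h from now; 1: will expire within 24h
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      && echo "Certificate will not expire" \
	      || echo "Certificate will expire"
	)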
	I0719 04:58:07.295457  163442 kubeadm.go:392] StartCluster: {Name:multinode-270078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-270078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.199 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.126 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:58:07.295562  163442 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 04:58:07.295616  163442 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 04:58:07.333440  163442 command_runner.go:130] > 31fffbf0c5d39c2cae58134d93f717c029a04143d8a407271e51a3eee5d53fee
	I0719 04:58:07.333475  163442 command_runner.go:130] > 9479115923e1476f87d4f9ff0aaa9428a706524e9af8230c485a77b61c88b38f
	I0719 04:58:07.333486  163442 command_runner.go:130] > 33b69ea0ad2f4ced8ca5a9cbb00cd82cee4d47163212947312b7db626ee10f91
	I0719 04:58:07.333493  163442 command_runner.go:130] > 055cf104d6bcd94aa209fcb410e05f96ce191340f62eecad3826a7ada7b521d1
	I0719 04:58:07.333498  163442 command_runner.go:130] > c4ed35a688d466e50ef053719ac811f72487848d9a77bb399a22fe1e445c6a68
	I0719 04:58:07.333503  163442 command_runner.go:130] > 3a6ddcbf56243021d5e0de54495d06019da39ed37ffac91a2b4f42cd4eae8884
	I0719 04:58:07.333508  163442 command_runner.go:130] > de944624d060c786278833c561aff05831f19ee086f8a1db3bcd28573b7cfd58
	I0719 04:58:07.333521  163442 command_runner.go:130] > 938b8fa47de5bc6b50fc4dc1842ace7580870e225f15d76f6b4e6dce2fc79401
	I0719 04:58:07.333546  163442 cri.go:89] found id: "31fffbf0c5d39c2cae58134d93f717c029a04143d8a407271e51a3eee5d53fee"
	I0719 04:58:07.333552  163442 cri.go:89] found id: "9479115923e1476f87d4f9ff0aaa9428a706524e9af8230c485a77b61c88b38f"
	I0719 04:58:07.333555  163442 cri.go:89] found id: "33b69ea0ad2f4ced8ca5a9cbb00cd82cee4d47163212947312b7db626ee10f91"
	I0719 04:58:07.333559  163442 cri.go:89] found id: "055cf104d6bcd94aa209fcb410e05f96ce191340f62eecad3826a7ada7b521d1"
	I0719 04:58:07.333562  163442 cri.go:89] found id: "c4ed35a688d466e50ef053719ac811f72487848d9a77bb399a22fe1e445c6a68"
	I0719 04:58:07.333565  163442 cri.go:89] found id: "3a6ddcbf56243021d5e0de54495d06019da39ed37ffac91a2b4f42cd4eae8884"
	I0719 04:58:07.333567  163442 cri.go:89] found id: "de944624d060c786278833c561aff05831f19ee086f8a1db3bcd28573b7cfd58"
	I0719 04:58:07.333570  163442 cri.go:89] found id: "938b8fa47de5bc6b50fc4dc1842ace7580870e225f15d76f6b4e6dce2fc79401"
	I0719 04:58:07.333572  163442 cri.go:89] found id: ""
	I0719 04:58:07.333614  163442 ssh_runner.go:195] Run: sudo runc list -f json
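	(The container IDs listed above come from filtering CRI-O for containers in the kube-system namespace, via the crictl command shown in the Run: line. A sketch of reproducing that listing on the node and resolving one of the logged IDs back to its pod metadata:
	    # list all (including exited) kube-system container IDs, as minikube does above
	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	    # inspect one of the IDs returned above to see its pod name and state
	    sudo crictl inspect 31fffbf0c5d39c2cae58134d93f717c029a04143d8a407271e51a3eee5d53fee
	)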
	
	
	==> CRI-O <==
	Jul 19 04:59:54 multinode-270078 crio[2825]: time="2024-07-19 04:59:54.619541227Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721365194619519960,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133267,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3fe5add9-c3bd-4913-a2be-b507a5024441 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 04:59:54 multinode-270078 crio[2825]: time="2024-07-19 04:59:54.620177443Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33b667d5-d49f-4b1f-a3c3-ad2ee838e0f4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:59:54 multinode-270078 crio[2825]: time="2024-07-19 04:59:54.620247148Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33b667d5-d49f-4b1f-a3c3-ad2ee838e0f4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:59:54 multinode-270078 crio[2825]: time="2024-07-19 04:59:54.620596957Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a08f4d6107a23f425f2dec6ef176831d08986018050e0d95ed0f59111e620ec0,PodSandboxId:eea26330c190476847869bd5df5688fa75402f135be5679f2e577cad6c59bb3d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721365127076313538,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hnr7x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c62f9d80-8985-4a63-88b5-587470389f71,},Annotations:map[string]string{io.kubernetes.container.hash: 368e1ca1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9be01f79f8da3ef0bc186f7447b4204d4d63a6ccac0071192ac76a67625560d1,PodSandboxId:e795aa419606052b4db6ce5c9974f75a3ee4df0da51c6bcc5acc459af77697ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721365093548896404,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzrm8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a11e3057-6b32-41a1-ac4e-7d8d225d7daa,},Annotations:map[string]string{io.kubernetes.container.hash: 60eaca83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:000677979f1c8e706c4b61f687d666974a63370255435878cad64b8411de5e6f,PodSandboxId:4e1699fef0b6fa028cc27622ac3e5a29c02818074532e304943e03af1abb0c76,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721365093450690176,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vgprr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43168421-b0df-4c84-b04a-7d1546c9a743,},Annotations:map[string]string{io.kubernetes.container.hash: dd796cb9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:799e82ef4e09a2f383bbb0370af8a24d51e91a63fec520568d8163efdaffd593,PodSandboxId:e3e47b38505198709a3f0ebb4ca40bb8ad9576d8f009ea4dbcb9f7c80efa2c9d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721365093437190045,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6b93c21-dfc9-4700-b89e-075132f74950,},An
notations:map[string]string{io.kubernetes.container.hash: 3366fd0e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9fb3de46f094b0d0a667b70372c5d21aa341c4924b29818f0d8c37a44214901,PodSandboxId:987538dbe9a7db0662c2e3a2a227fa56e949890a4123b1d1a5e2d44a7c2dc7cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721365093351546500,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7qj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1361f5f1-8094-4ca4-b6b9-3a104ad7d9a4,},Annotations:map[string]string{io.ku
bernetes.container.hash: f6f1eb61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:133cfe56953f01460a9fa4494092d188466178166e2dcfe71035b5d6b7545e8f,PodSandboxId:756b92b10f2f3048ac7644d6dbd74183a703634a3b532543bb7c24d1ceca7a66,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721365089548589249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed2ccd1a8875c16a07e0a333adf6d38,},Annotations:map[string]string{io.kubernetes.container.hash: f3f88bfe,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4243cd4492945493dffe7402237b7f0a5227fd3901c70b61f08f4914c3fb9e0,PodSandboxId:22ec679e11aaeaf036fd196eba5a6a50b474275c822fd21941f52119129654c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721365089538141066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151ad1a1872f30489054dde392cad73d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a38317f7436fad30424d945b8be343738b549bd1c74e0a51e050bdf48209a595,PodSandboxId:8f00246d9fae6ab2c3653d8b700535d4928bfaae3bf14c376b2e31fa8ae03ff2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721365089569680290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb3fdfc2ee14ec86fba08207580f105,},Annotations:map[string]string{io.kubernetes.container.hash: cacf0bef,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c87c6e53508415123b14762a3523c4e36d53f47c3607e4db1618bd3d9d3792e,PodSandboxId:83fe0d136542c98291b58ce36d7a74d5324e3ab081f9a2cc17a7d6ac92f341ba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721365089496925240,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1059e4f824c05230999a9ad26e02cc8d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af4ea4de1fab0615f814f44cc8f79161a4265145329eced45452330fb85e5635,PodSandboxId:7adc5b1bc87cb9a48cb7fe7967594ee97eb92721e3848105da137442a71de253,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721364773214850203,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hnr7x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c62f9d80-8985-4a63-88b5-587470389f71,},Annotations:map[string]string{io.kubernetes.container.hash: 368e1ca1,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31fffbf0c5d39c2cae58134d93f717c029a04143d8a407271e51a3eee5d53fee,PodSandboxId:d4270eaeb3d13634548801f213164841d460d5fccf3351e1df6e5bde36623b4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721364718671026127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vgprr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43168421-b0df-4c84-b04a-7d1546c9a743,},Annotations:map[string]string{io.kubernetes.container.hash: dd796cb9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9479115923e1476f87d4f9ff0aaa9428a706524e9af8230c485a77b61c88b38f,PodSandboxId:47761801c423190c4d9700bf3061bd45960d09fede8214960a9d6d000763b865,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721364718610626868,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d6b93c21-dfc9-4700-b89e-075132f74950,},Annotations:map[string]string{io.kubernetes.container.hash: 3366fd0e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33b69ea0ad2f4ced8ca5a9cbb00cd82cee4d47163212947312b7db626ee10f91,PodSandboxId:96e37395172bc78efd464da75623fd3bd30c13e531df67fb20edcf628687be43,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721364707185513519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7qj9p,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 1361f5f1-8094-4ca4-b6b9-3a104ad7d9a4,},Annotations:map[string]string{io.kubernetes.container.hash: f6f1eb61,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:055cf104d6bcd94aa209fcb410e05f96ce191340f62eecad3826a7ada7b521d1,PodSandboxId:061c80aa4b16f32e20cb66b90d67f709b92448b00c18393ad806cb0ee797a78a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721364706558824833,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzrm8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a11e3057-6b32-41a1-ac4e
-7d8d225d7daa,},Annotations:map[string]string{io.kubernetes.container.hash: 60eaca83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a6ddcbf56243021d5e0de54495d06019da39ed37ffac91a2b4f42cd4eae8884,PodSandboxId:8382f89eeff031a4482e2c2b0935c84950b1ea921b08e0862e1977556b6c3050,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721364687064606936,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed2ccd1a8875c16a07e0a333adf6d38,},Annotations:map[string]string
{io.kubernetes.container.hash: f3f88bfe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4ed35a688d466e50ef053719ac811f72487848d9a77bb399a22fe1e445c6a68,PodSandboxId:76d4b46d4d1352567b808f0f117460416443a878cc4fd4daa0dff4d8f1718a9d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721364687065435651,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1059e4f824c05230999a9ad26e02cc8d,},Annotations:map[s
tring]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de944624d060c786278833c561aff05831f19ee086f8a1db3bcd28573b7cfd58,PodSandboxId:e4b9588e49be18a08b3549453da56926edc2ab71feb8e2c513a70cab6e119305,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721364687055477015,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb3fdfc2ee14ec86fba08207580f105,},Annotations:map[string]string{io
.kubernetes.container.hash: cacf0bef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:938b8fa47de5bc6b50fc4dc1842ace7580870e225f15d76f6b4e6dce2fc79401,PodSandboxId:5b8ae4e732fa0c50432de04176ed2cbb321f601db7e768d33cf2dc344aab35bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721364686885604940,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151ad1a1872f30489054dde392cad73d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=33b667d5-d49f-4b1f-a3c3-ad2ee838e0f4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:59:54 multinode-270078 crio[2825]: time="2024-07-19 04:59:54.660649837Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=22287620-5b71-48fd-9d18-5c4a2e2afabf name=/runtime.v1.RuntimeService/Version
	Jul 19 04:59:54 multinode-270078 crio[2825]: time="2024-07-19 04:59:54.660723288Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=22287620-5b71-48fd-9d18-5c4a2e2afabf name=/runtime.v1.RuntimeService/Version
	Jul 19 04:59:54 multinode-270078 crio[2825]: time="2024-07-19 04:59:54.661992806Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=178c3a5c-473e-40d5-a1d3-372a81033de4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 04:59:54 multinode-270078 crio[2825]: time="2024-07-19 04:59:54.662353550Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721365194662331971,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133267,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=178c3a5c-473e-40d5-a1d3-372a81033de4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 04:59:54 multinode-270078 crio[2825]: time="2024-07-19 04:59:54.663067144Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=43b5fa2b-6e92-4be9-9432-f8698d205b8f name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:59:54 multinode-270078 crio[2825]: time="2024-07-19 04:59:54.663119747Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=43b5fa2b-6e92-4be9-9432-f8698d205b8f name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:59:54 multinode-270078 crio[2825]: time="2024-07-19 04:59:54.663468674Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a08f4d6107a23f425f2dec6ef176831d08986018050e0d95ed0f59111e620ec0,PodSandboxId:eea26330c190476847869bd5df5688fa75402f135be5679f2e577cad6c59bb3d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721365127076313538,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hnr7x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c62f9d80-8985-4a63-88b5-587470389f71,},Annotations:map[string]string{io.kubernetes.container.hash: 368e1ca1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9be01f79f8da3ef0bc186f7447b4204d4d63a6ccac0071192ac76a67625560d1,PodSandboxId:e795aa419606052b4db6ce5c9974f75a3ee4df0da51c6bcc5acc459af77697ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721365093548896404,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzrm8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a11e3057-6b32-41a1-ac4e-7d8d225d7daa,},Annotations:map[string]string{io.kubernetes.container.hash: 60eaca83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:000677979f1c8e706c4b61f687d666974a63370255435878cad64b8411de5e6f,PodSandboxId:4e1699fef0b6fa028cc27622ac3e5a29c02818074532e304943e03af1abb0c76,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721365093450690176,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vgprr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43168421-b0df-4c84-b04a-7d1546c9a743,},Annotations:map[string]string{io.kubernetes.container.hash: dd796cb9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:799e82ef4e09a2f383bbb0370af8a24d51e91a63fec520568d8163efdaffd593,PodSandboxId:e3e47b38505198709a3f0ebb4ca40bb8ad9576d8f009ea4dbcb9f7c80efa2c9d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721365093437190045,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6b93c21-dfc9-4700-b89e-075132f74950,},An
notations:map[string]string{io.kubernetes.container.hash: 3366fd0e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9fb3de46f094b0d0a667b70372c5d21aa341c4924b29818f0d8c37a44214901,PodSandboxId:987538dbe9a7db0662c2e3a2a227fa56e949890a4123b1d1a5e2d44a7c2dc7cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721365093351546500,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7qj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1361f5f1-8094-4ca4-b6b9-3a104ad7d9a4,},Annotations:map[string]string{io.ku
bernetes.container.hash: f6f1eb61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:133cfe56953f01460a9fa4494092d188466178166e2dcfe71035b5d6b7545e8f,PodSandboxId:756b92b10f2f3048ac7644d6dbd74183a703634a3b532543bb7c24d1ceca7a66,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721365089548589249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed2ccd1a8875c16a07e0a333adf6d38,},Annotations:map[string]string{io.kubernetes.container.hash: f3f88bfe,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4243cd4492945493dffe7402237b7f0a5227fd3901c70b61f08f4914c3fb9e0,PodSandboxId:22ec679e11aaeaf036fd196eba5a6a50b474275c822fd21941f52119129654c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721365089538141066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151ad1a1872f30489054dde392cad73d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a38317f7436fad30424d945b8be343738b549bd1c74e0a51e050bdf48209a595,PodSandboxId:8f00246d9fae6ab2c3653d8b700535d4928bfaae3bf14c376b2e31fa8ae03ff2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721365089569680290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb3fdfc2ee14ec86fba08207580f105,},Annotations:map[string]string{io.kubernetes.container.hash: cacf0bef,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c87c6e53508415123b14762a3523c4e36d53f47c3607e4db1618bd3d9d3792e,PodSandboxId:83fe0d136542c98291b58ce36d7a74d5324e3ab081f9a2cc17a7d6ac92f341ba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721365089496925240,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1059e4f824c05230999a9ad26e02cc8d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af4ea4de1fab0615f814f44cc8f79161a4265145329eced45452330fb85e5635,PodSandboxId:7adc5b1bc87cb9a48cb7fe7967594ee97eb92721e3848105da137442a71de253,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721364773214850203,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hnr7x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c62f9d80-8985-4a63-88b5-587470389f71,},Annotations:map[string]string{io.kubernetes.container.hash: 368e1ca1,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31fffbf0c5d39c2cae58134d93f717c029a04143d8a407271e51a3eee5d53fee,PodSandboxId:d4270eaeb3d13634548801f213164841d460d5fccf3351e1df6e5bde36623b4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721364718671026127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vgprr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43168421-b0df-4c84-b04a-7d1546c9a743,},Annotations:map[string]string{io.kubernetes.container.hash: dd796cb9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9479115923e1476f87d4f9ff0aaa9428a706524e9af8230c485a77b61c88b38f,PodSandboxId:47761801c423190c4d9700bf3061bd45960d09fede8214960a9d6d000763b865,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721364718610626868,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d6b93c21-dfc9-4700-b89e-075132f74950,},Annotations:map[string]string{io.kubernetes.container.hash: 3366fd0e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33b69ea0ad2f4ced8ca5a9cbb00cd82cee4d47163212947312b7db626ee10f91,PodSandboxId:96e37395172bc78efd464da75623fd3bd30c13e531df67fb20edcf628687be43,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721364707185513519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7qj9p,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 1361f5f1-8094-4ca4-b6b9-3a104ad7d9a4,},Annotations:map[string]string{io.kubernetes.container.hash: f6f1eb61,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:055cf104d6bcd94aa209fcb410e05f96ce191340f62eecad3826a7ada7b521d1,PodSandboxId:061c80aa4b16f32e20cb66b90d67f709b92448b00c18393ad806cb0ee797a78a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721364706558824833,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzrm8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a11e3057-6b32-41a1-ac4e
-7d8d225d7daa,},Annotations:map[string]string{io.kubernetes.container.hash: 60eaca83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a6ddcbf56243021d5e0de54495d06019da39ed37ffac91a2b4f42cd4eae8884,PodSandboxId:8382f89eeff031a4482e2c2b0935c84950b1ea921b08e0862e1977556b6c3050,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721364687064606936,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed2ccd1a8875c16a07e0a333adf6d38,},Annotations:map[string]string
{io.kubernetes.container.hash: f3f88bfe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4ed35a688d466e50ef053719ac811f72487848d9a77bb399a22fe1e445c6a68,PodSandboxId:76d4b46d4d1352567b808f0f117460416443a878cc4fd4daa0dff4d8f1718a9d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721364687065435651,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1059e4f824c05230999a9ad26e02cc8d,},Annotations:map[s
tring]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de944624d060c786278833c561aff05831f19ee086f8a1db3bcd28573b7cfd58,PodSandboxId:e4b9588e49be18a08b3549453da56926edc2ab71feb8e2c513a70cab6e119305,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721364687055477015,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb3fdfc2ee14ec86fba08207580f105,},Annotations:map[string]string{io
.kubernetes.container.hash: cacf0bef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:938b8fa47de5bc6b50fc4dc1842ace7580870e225f15d76f6b4e6dce2fc79401,PodSandboxId:5b8ae4e732fa0c50432de04176ed2cbb321f601db7e768d33cf2dc344aab35bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721364686885604940,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151ad1a1872f30489054dde392cad73d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=43b5fa2b-6e92-4be9-9432-f8698d205b8f name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:59:54 multinode-270078 crio[2825]: time="2024-07-19 04:59:54.703400294Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8af81a04-0684-4ade-abf7-a593f0c14c16 name=/runtime.v1.RuntimeService/Version
	Jul 19 04:59:54 multinode-270078 crio[2825]: time="2024-07-19 04:59:54.703476787Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8af81a04-0684-4ade-abf7-a593f0c14c16 name=/runtime.v1.RuntimeService/Version
	Jul 19 04:59:54 multinode-270078 crio[2825]: time="2024-07-19 04:59:54.704480308Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a91f879b-bcc1-4cf6-8a56-ec7a80a3b19c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 04:59:54 multinode-270078 crio[2825]: time="2024-07-19 04:59:54.705120731Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721365194705096275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133267,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a91f879b-bcc1-4cf6-8a56-ec7a80a3b19c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 04:59:54 multinode-270078 crio[2825]: time="2024-07-19 04:59:54.705575925Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17000a77-86af-4d83-8e0c-5a45343cd4b0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:59:54 multinode-270078 crio[2825]: time="2024-07-19 04:59:54.705627294Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17000a77-86af-4d83-8e0c-5a45343cd4b0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:59:54 multinode-270078 crio[2825]: time="2024-07-19 04:59:54.706216292Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a08f4d6107a23f425f2dec6ef176831d08986018050e0d95ed0f59111e620ec0,PodSandboxId:eea26330c190476847869bd5df5688fa75402f135be5679f2e577cad6c59bb3d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721365127076313538,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hnr7x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c62f9d80-8985-4a63-88b5-587470389f71,},Annotations:map[string]string{io.kubernetes.container.hash: 368e1ca1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9be01f79f8da3ef0bc186f7447b4204d4d63a6ccac0071192ac76a67625560d1,PodSandboxId:e795aa419606052b4db6ce5c9974f75a3ee4df0da51c6bcc5acc459af77697ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721365093548896404,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzrm8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a11e3057-6b32-41a1-ac4e-7d8d225d7daa,},Annotations:map[string]string{io.kubernetes.container.hash: 60eaca83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:000677979f1c8e706c4b61f687d666974a63370255435878cad64b8411de5e6f,PodSandboxId:4e1699fef0b6fa028cc27622ac3e5a29c02818074532e304943e03af1abb0c76,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721365093450690176,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vgprr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43168421-b0df-4c84-b04a-7d1546c9a743,},Annotations:map[string]string{io.kubernetes.container.hash: dd796cb9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:799e82ef4e09a2f383bbb0370af8a24d51e91a63fec520568d8163efdaffd593,PodSandboxId:e3e47b38505198709a3f0ebb4ca40bb8ad9576d8f009ea4dbcb9f7c80efa2c9d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721365093437190045,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6b93c21-dfc9-4700-b89e-075132f74950,},An
notations:map[string]string{io.kubernetes.container.hash: 3366fd0e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9fb3de46f094b0d0a667b70372c5d21aa341c4924b29818f0d8c37a44214901,PodSandboxId:987538dbe9a7db0662c2e3a2a227fa56e949890a4123b1d1a5e2d44a7c2dc7cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721365093351546500,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7qj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1361f5f1-8094-4ca4-b6b9-3a104ad7d9a4,},Annotations:map[string]string{io.ku
bernetes.container.hash: f6f1eb61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:133cfe56953f01460a9fa4494092d188466178166e2dcfe71035b5d6b7545e8f,PodSandboxId:756b92b10f2f3048ac7644d6dbd74183a703634a3b532543bb7c24d1ceca7a66,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721365089548589249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed2ccd1a8875c16a07e0a333adf6d38,},Annotations:map[string]string{io.kubernetes.container.hash: f3f88bfe,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4243cd4492945493dffe7402237b7f0a5227fd3901c70b61f08f4914c3fb9e0,PodSandboxId:22ec679e11aaeaf036fd196eba5a6a50b474275c822fd21941f52119129654c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721365089538141066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151ad1a1872f30489054dde392cad73d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a38317f7436fad30424d945b8be343738b549bd1c74e0a51e050bdf48209a595,PodSandboxId:8f00246d9fae6ab2c3653d8b700535d4928bfaae3bf14c376b2e31fa8ae03ff2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721365089569680290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb3fdfc2ee14ec86fba08207580f105,},Annotations:map[string]string{io.kubernetes.container.hash: cacf0bef,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c87c6e53508415123b14762a3523c4e36d53f47c3607e4db1618bd3d9d3792e,PodSandboxId:83fe0d136542c98291b58ce36d7a74d5324e3ab081f9a2cc17a7d6ac92f341ba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721365089496925240,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1059e4f824c05230999a9ad26e02cc8d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af4ea4de1fab0615f814f44cc8f79161a4265145329eced45452330fb85e5635,PodSandboxId:7adc5b1bc87cb9a48cb7fe7967594ee97eb92721e3848105da137442a71de253,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721364773214850203,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hnr7x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c62f9d80-8985-4a63-88b5-587470389f71,},Annotations:map[string]string{io.kubernetes.container.hash: 368e1ca1,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31fffbf0c5d39c2cae58134d93f717c029a04143d8a407271e51a3eee5d53fee,PodSandboxId:d4270eaeb3d13634548801f213164841d460d5fccf3351e1df6e5bde36623b4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721364718671026127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vgprr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43168421-b0df-4c84-b04a-7d1546c9a743,},Annotations:map[string]string{io.kubernetes.container.hash: dd796cb9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9479115923e1476f87d4f9ff0aaa9428a706524e9af8230c485a77b61c88b38f,PodSandboxId:47761801c423190c4d9700bf3061bd45960d09fede8214960a9d6d000763b865,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721364718610626868,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d6b93c21-dfc9-4700-b89e-075132f74950,},Annotations:map[string]string{io.kubernetes.container.hash: 3366fd0e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33b69ea0ad2f4ced8ca5a9cbb00cd82cee4d47163212947312b7db626ee10f91,PodSandboxId:96e37395172bc78efd464da75623fd3bd30c13e531df67fb20edcf628687be43,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721364707185513519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7qj9p,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 1361f5f1-8094-4ca4-b6b9-3a104ad7d9a4,},Annotations:map[string]string{io.kubernetes.container.hash: f6f1eb61,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:055cf104d6bcd94aa209fcb410e05f96ce191340f62eecad3826a7ada7b521d1,PodSandboxId:061c80aa4b16f32e20cb66b90d67f709b92448b00c18393ad806cb0ee797a78a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721364706558824833,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzrm8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a11e3057-6b32-41a1-ac4e
-7d8d225d7daa,},Annotations:map[string]string{io.kubernetes.container.hash: 60eaca83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a6ddcbf56243021d5e0de54495d06019da39ed37ffac91a2b4f42cd4eae8884,PodSandboxId:8382f89eeff031a4482e2c2b0935c84950b1ea921b08e0862e1977556b6c3050,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721364687064606936,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed2ccd1a8875c16a07e0a333adf6d38,},Annotations:map[string]string
{io.kubernetes.container.hash: f3f88bfe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4ed35a688d466e50ef053719ac811f72487848d9a77bb399a22fe1e445c6a68,PodSandboxId:76d4b46d4d1352567b808f0f117460416443a878cc4fd4daa0dff4d8f1718a9d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721364687065435651,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1059e4f824c05230999a9ad26e02cc8d,},Annotations:map[s
tring]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de944624d060c786278833c561aff05831f19ee086f8a1db3bcd28573b7cfd58,PodSandboxId:e4b9588e49be18a08b3549453da56926edc2ab71feb8e2c513a70cab6e119305,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721364687055477015,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb3fdfc2ee14ec86fba08207580f105,},Annotations:map[string]string{io
.kubernetes.container.hash: cacf0bef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:938b8fa47de5bc6b50fc4dc1842ace7580870e225f15d76f6b4e6dce2fc79401,PodSandboxId:5b8ae4e732fa0c50432de04176ed2cbb321f601db7e768d33cf2dc344aab35bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721364686885604940,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151ad1a1872f30489054dde392cad73d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=17000a77-86af-4d83-8e0c-5a45343cd4b0 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:59:54 multinode-270078 crio[2825]: time="2024-07-19 04:59:54.744361510Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=32e1a508-a2d7-4c0a-933e-61ee4b85bf1f name=/runtime.v1.RuntimeService/Version
	Jul 19 04:59:54 multinode-270078 crio[2825]: time="2024-07-19 04:59:54.744438313Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=32e1a508-a2d7-4c0a-933e-61ee4b85bf1f name=/runtime.v1.RuntimeService/Version
	Jul 19 04:59:54 multinode-270078 crio[2825]: time="2024-07-19 04:59:54.745563586Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0ef27689-133a-4b97-8bd7-9625dc0185aa name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 04:59:54 multinode-270078 crio[2825]: time="2024-07-19 04:59:54.746047258Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721365194746023207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133267,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0ef27689-133a-4b97-8bd7-9625dc0185aa name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 04:59:54 multinode-270078 crio[2825]: time="2024-07-19 04:59:54.746790865Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dfc17435-53b5-4897-b192-2104b8f0cef3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:59:54 multinode-270078 crio[2825]: time="2024-07-19 04:59:54.746944647Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dfc17435-53b5-4897-b192-2104b8f0cef3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 04:59:54 multinode-270078 crio[2825]: time="2024-07-19 04:59:54.747429963Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a08f4d6107a23f425f2dec6ef176831d08986018050e0d95ed0f59111e620ec0,PodSandboxId:eea26330c190476847869bd5df5688fa75402f135be5679f2e577cad6c59bb3d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721365127076313538,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hnr7x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c62f9d80-8985-4a63-88b5-587470389f71,},Annotations:map[string]string{io.kubernetes.container.hash: 368e1ca1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9be01f79f8da3ef0bc186f7447b4204d4d63a6ccac0071192ac76a67625560d1,PodSandboxId:e795aa419606052b4db6ce5c9974f75a3ee4df0da51c6bcc5acc459af77697ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721365093548896404,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzrm8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a11e3057-6b32-41a1-ac4e-7d8d225d7daa,},Annotations:map[string]string{io.kubernetes.container.hash: 60eaca83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:000677979f1c8e706c4b61f687d666974a63370255435878cad64b8411de5e6f,PodSandboxId:4e1699fef0b6fa028cc27622ac3e5a29c02818074532e304943e03af1abb0c76,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721365093450690176,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vgprr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43168421-b0df-4c84-b04a-7d1546c9a743,},Annotations:map[string]string{io.kubernetes.container.hash: dd796cb9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:799e82ef4e09a2f383bbb0370af8a24d51e91a63fec520568d8163efdaffd593,PodSandboxId:e3e47b38505198709a3f0ebb4ca40bb8ad9576d8f009ea4dbcb9f7c80efa2c9d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721365093437190045,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6b93c21-dfc9-4700-b89e-075132f74950,},An
notations:map[string]string{io.kubernetes.container.hash: 3366fd0e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9fb3de46f094b0d0a667b70372c5d21aa341c4924b29818f0d8c37a44214901,PodSandboxId:987538dbe9a7db0662c2e3a2a227fa56e949890a4123b1d1a5e2d44a7c2dc7cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721365093351546500,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7qj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1361f5f1-8094-4ca4-b6b9-3a104ad7d9a4,},Annotations:map[string]string{io.ku
bernetes.container.hash: f6f1eb61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:133cfe56953f01460a9fa4494092d188466178166e2dcfe71035b5d6b7545e8f,PodSandboxId:756b92b10f2f3048ac7644d6dbd74183a703634a3b532543bb7c24d1ceca7a66,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721365089548589249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed2ccd1a8875c16a07e0a333adf6d38,},Annotations:map[string]string{io.kubernetes.container.hash: f3f88bfe,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4243cd4492945493dffe7402237b7f0a5227fd3901c70b61f08f4914c3fb9e0,PodSandboxId:22ec679e11aaeaf036fd196eba5a6a50b474275c822fd21941f52119129654c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721365089538141066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151ad1a1872f30489054dde392cad73d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a38317f7436fad30424d945b8be343738b549bd1c74e0a51e050bdf48209a595,PodSandboxId:8f00246d9fae6ab2c3653d8b700535d4928bfaae3bf14c376b2e31fa8ae03ff2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721365089569680290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb3fdfc2ee14ec86fba08207580f105,},Annotations:map[string]string{io.kubernetes.container.hash: cacf0bef,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c87c6e53508415123b14762a3523c4e36d53f47c3607e4db1618bd3d9d3792e,PodSandboxId:83fe0d136542c98291b58ce36d7a74d5324e3ab081f9a2cc17a7d6ac92f341ba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721365089496925240,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1059e4f824c05230999a9ad26e02cc8d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af4ea4de1fab0615f814f44cc8f79161a4265145329eced45452330fb85e5635,PodSandboxId:7adc5b1bc87cb9a48cb7fe7967594ee97eb92721e3848105da137442a71de253,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721364773214850203,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hnr7x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c62f9d80-8985-4a63-88b5-587470389f71,},Annotations:map[string]string{io.kubernetes.container.hash: 368e1ca1,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31fffbf0c5d39c2cae58134d93f717c029a04143d8a407271e51a3eee5d53fee,PodSandboxId:d4270eaeb3d13634548801f213164841d460d5fccf3351e1df6e5bde36623b4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721364718671026127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vgprr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43168421-b0df-4c84-b04a-7d1546c9a743,},Annotations:map[string]string{io.kubernetes.container.hash: dd796cb9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9479115923e1476f87d4f9ff0aaa9428a706524e9af8230c485a77b61c88b38f,PodSandboxId:47761801c423190c4d9700bf3061bd45960d09fede8214960a9d6d000763b865,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721364718610626868,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d6b93c21-dfc9-4700-b89e-075132f74950,},Annotations:map[string]string{io.kubernetes.container.hash: 3366fd0e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33b69ea0ad2f4ced8ca5a9cbb00cd82cee4d47163212947312b7db626ee10f91,PodSandboxId:96e37395172bc78efd464da75623fd3bd30c13e531df67fb20edcf628687be43,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721364707185513519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7qj9p,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 1361f5f1-8094-4ca4-b6b9-3a104ad7d9a4,},Annotations:map[string]string{io.kubernetes.container.hash: f6f1eb61,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:055cf104d6bcd94aa209fcb410e05f96ce191340f62eecad3826a7ada7b521d1,PodSandboxId:061c80aa4b16f32e20cb66b90d67f709b92448b00c18393ad806cb0ee797a78a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721364706558824833,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzrm8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a11e3057-6b32-41a1-ac4e
-7d8d225d7daa,},Annotations:map[string]string{io.kubernetes.container.hash: 60eaca83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a6ddcbf56243021d5e0de54495d06019da39ed37ffac91a2b4f42cd4eae8884,PodSandboxId:8382f89eeff031a4482e2c2b0935c84950b1ea921b08e0862e1977556b6c3050,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721364687064606936,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed2ccd1a8875c16a07e0a333adf6d38,},Annotations:map[string]string
{io.kubernetes.container.hash: f3f88bfe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4ed35a688d466e50ef053719ac811f72487848d9a77bb399a22fe1e445c6a68,PodSandboxId:76d4b46d4d1352567b808f0f117460416443a878cc4fd4daa0dff4d8f1718a9d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721364687065435651,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1059e4f824c05230999a9ad26e02cc8d,},Annotations:map[s
tring]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de944624d060c786278833c561aff05831f19ee086f8a1db3bcd28573b7cfd58,PodSandboxId:e4b9588e49be18a08b3549453da56926edc2ab71feb8e2c513a70cab6e119305,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721364687055477015,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb3fdfc2ee14ec86fba08207580f105,},Annotations:map[string]string{io
.kubernetes.container.hash: cacf0bef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:938b8fa47de5bc6b50fc4dc1842ace7580870e225f15d76f6b4e6dce2fc79401,PodSandboxId:5b8ae4e732fa0c50432de04176ed2cbb321f601db7e768d33cf2dc344aab35bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721364686885604940,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151ad1a1872f30489054dde392cad73d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dfc17435-53b5-4897-b192-2104b8f0cef3 name=/runtime.v1.RuntimeService/ListContainers
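	For reference, the paired "Request:"/"Response:" debug entries above are CRI-O answering Kubernetes CRI gRPC calls (/runtime.v1.RuntimeService/Version, /runtime.v1.ImageService/ImageFsInfo, /runtime.v1.RuntimeService/ListContainers) on its unix socket. Below is a minimal, illustrative Go sketch of issuing the same ListContainers call with the k8s.io/cri-api v1 client; it is not part of the captured output and assumes the default CRI-O socket path reported in the node annotations (unix:///var/run/crio/crio.sock).
	
	package main
	
	import (
		"context"
		"fmt"
		"log"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// CRI-O serves the CRI over a local unix socket; no TLS is involved.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
	
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// An empty filter returns the full container list, matching the
		// "No filters were applied, returning full container list" message above.
		client := runtimev1.NewRuntimeServiceClient(conn)
		resp, err := client.ListContainers(ctx, &runtimev1.ListContainersRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			// CreatedAt is reported in nanoseconds since the Unix epoch.
			fmt.Printf("%-13.13s  %-25s  %-17s  attempt=%d  created=%s\n",
				c.GetId(), c.GetMetadata().GetName(), c.GetState(),
				c.GetMetadata().GetAttempt(),
				time.Unix(0, c.GetCreatedAt()).Format(time.RFC3339))
		}
	}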
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a08f4d6107a23       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   eea26330c1904       busybox-fc5497c4f-hnr7x
	9be01f79f8da3       5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f                                      About a minute ago   Running             kindnet-cni               1                   e795aa4196060       kindnet-fzrm8
	000677979f1c8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   4e1699fef0b6f       coredns-7db6d8ff4d-vgprr
	799e82ef4e09a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   e3e47b3850519       storage-provisioner
	c9fb3de46f094       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      About a minute ago   Running             kube-proxy                1                   987538dbe9a7d       kube-proxy-7qj9p
	a38317f7436fa       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            1                   8f00246d9fae6       kube-apiserver-multinode-270078
	133cfe56953f0       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   756b92b10f2f3       etcd-multinode-270078
	c4243cd449294       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      About a minute ago   Running             kube-scheduler            1                   22ec679e11aae       kube-scheduler-multinode-270078
	4c87c6e535084       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   1                   83fe0d136542c       kube-controller-manager-multinode-270078
	af4ea4de1fab0       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   7adc5b1bc87cb       busybox-fc5497c4f-hnr7x
	31fffbf0c5d39       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   d4270eaeb3d13       coredns-7db6d8ff4d-vgprr
	9479115923e14       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   47761801c4231       storage-provisioner
	33b69ea0ad2f4       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago        Exited              kube-proxy                0                   96e37395172bc       kube-proxy-7qj9p
	055cf104d6bcd       5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f                                      8 minutes ago        Exited              kindnet-cni               0                   061c80aa4b16f       kindnet-fzrm8
	c4ed35a688d46       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago        Exited              kube-controller-manager   0                   76d4b46d4d135       kube-controller-manager-multinode-270078
	3a6ddcbf56243       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   8382f89eeff03       etcd-multinode-270078
	de944624d060c       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago        Exited              kube-apiserver            0                   e4b9588e49be1       kube-apiserver-multinode-270078
	938b8fa47de5b       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago        Exited              kube-scheduler            0                   5b8ae4e732fa0       kube-scheduler-multinode-270078
	
	
	==> coredns [000677979f1c8e706c4b61f687d666974a63370255435878cad64b8411de5e6f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56511 - 26371 "HINFO IN 5389642236483648416.4908493452828383574. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012349334s
	
	
	==> coredns [31fffbf0c5d39c2cae58134d93f717c029a04143d8a407271e51a3eee5d53fee] <==
	[INFO] 10.244.1.2:56378 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001841418s
	[INFO] 10.244.1.2:42241 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000190661s
	[INFO] 10.244.1.2:50886 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074374s
	[INFO] 10.244.1.2:38850 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001348101s
	[INFO] 10.244.1.2:55758 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000067584s
	[INFO] 10.244.1.2:33739 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000056266s
	[INFO] 10.244.1.2:58724 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066024s
	[INFO] 10.244.0.3:45855 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102308s
	[INFO] 10.244.0.3:50514 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000062647s
	[INFO] 10.244.0.3:42290 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073421s
	[INFO] 10.244.0.3:51562 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000045836s
	[INFO] 10.244.1.2:38503 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093476s
	[INFO] 10.244.1.2:34185 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072618s
	[INFO] 10.244.1.2:37438 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005982s
	[INFO] 10.244.1.2:60714 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000049414s
	[INFO] 10.244.0.3:49543 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111061s
	[INFO] 10.244.0.3:46617 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000194799s
	[INFO] 10.244.0.3:37021 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000127817s
	[INFO] 10.244.0.3:52002 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000105369s
	[INFO] 10.244.1.2:38508 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157328s
	[INFO] 10.244.1.2:53991 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000099448s
	[INFO] 10.244.1.2:56096 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000070912s
	[INFO] 10.244.1.2:56089 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000072844s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
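	For reference, each query line above follows the CoreDNS log plugin layout: client address and port, a per-client query counter, the quoted query (type, class, name, transport, request size, DNSSEC DO flag, advertised UDP buffer size), then the response code, response flags, response size in bytes, and the lookup duration. The NXDOMAIN answers for the short forms (kubernetes.default., kubernetes.default.default.svc.cluster.local.) are expected search-path misses; only the fully qualified kubernetes.default.svc.cluster.local. resolves with NOERROR.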
	
	
	==> describe nodes <==
	Name:               multinode-270078
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-270078
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=multinode-270078
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T04_51_33_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 04:51:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-270078
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 04:59:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 04:58:12 +0000   Fri, 19 Jul 2024 04:51:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 04:58:12 +0000   Fri, 19 Jul 2024 04:51:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 04:58:12 +0000   Fri, 19 Jul 2024 04:51:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 04:58:12 +0000   Fri, 19 Jul 2024 04:51:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    multinode-270078
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4ed569a1cbcf4e7c9997772206799d49
	  System UUID:                4ed569a1-cbcf-4e7c-9997-772206799d49
	  Boot ID:                    ad789f78-98f7-47e5-9dc4-82f6628b4d18
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hnr7x                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m5s
	  kube-system                 coredns-7db6d8ff4d-vgprr                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m10s
	  kube-system                 etcd-multinode-270078                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m23s
	  kube-system                 kindnet-fzrm8                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m10s
	  kube-system                 kube-apiserver-multinode-270078             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-controller-manager-multinode-270078    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 kube-proxy-7qj9p                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 kube-scheduler-multinode-270078             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
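	  (The percentages are computed against the node's allocatable capacity: 850m of the 2000m allocatable CPU is 42.5%, shown as 42%, and 220Mi, i.e. 225280Ki, of 2164184Ki allocatable memory is roughly 10%.)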
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m7s                 kube-proxy       
	  Normal  Starting                 101s                 kube-proxy       
	  Normal  Starting                 8m24s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     8m23s                kubelet          Node multinode-270078 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  8m23s                kubelet          Node multinode-270078 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m23s                kubelet          Node multinode-270078 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  8m23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m11s                node-controller  Node multinode-270078 event: Registered Node multinode-270078 in Controller
	  Normal  NodeReady                7m57s                kubelet          Node multinode-270078 status is now: NodeReady
	  Normal  Starting                 107s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  107s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  106s (x8 over 107s)  kubelet          Node multinode-270078 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    106s (x8 over 107s)  kubelet          Node multinode-270078 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     106s (x7 over 107s)  kubelet          Node multinode-270078 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           90s                  node-controller  Node multinode-270078 event: Registered Node multinode-270078 in Controller
	
	
	Name:               multinode-270078-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-270078-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=multinode-270078
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T04_58_54_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 04:58:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-270078-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 04:59:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 04:59:24 +0000   Fri, 19 Jul 2024 04:58:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 04:59:24 +0000   Fri, 19 Jul 2024 04:58:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 04:59:24 +0000   Fri, 19 Jul 2024 04:58:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 04:59:24 +0000   Fri, 19 Jul 2024 04:59:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.199
	  Hostname:    multinode-270078-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3de61bf841c647cb861852b49725b4e3
	  System UUID:                3de61bf8-41c6-47cb-8618-52b49725b4e3
	  Boot ID:                    0458ebc9-b9d4-4c03-8f36-f91d3b59ce87
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hps86    0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kindnet-ctdvf              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m28s
	  kube-system                 kube-proxy-6xrft           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m23s                  kube-proxy  
	  Normal  Starting                 56s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m29s (x2 over 7m29s)  kubelet     Node multinode-270078-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m29s (x2 over 7m29s)  kubelet     Node multinode-270078-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m29s (x2 over 7m29s)  kubelet     Node multinode-270078-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m28s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m8s                   kubelet     Node multinode-270078-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  62s (x2 over 62s)      kubelet     Node multinode-270078-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x2 over 62s)      kubelet     Node multinode-270078-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x2 over 62s)      kubelet     Node multinode-270078-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  62s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                42s                    kubelet     Node multinode-270078-m02 status is now: NodeReady
	
	
	Name:               multinode-270078-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-270078-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=multinode-270078
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T04_59_32_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 04:59:32 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-270078-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 04:59:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 04:59:51 +0000   Fri, 19 Jul 2024 04:59:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 04:59:51 +0000   Fri, 19 Jul 2024 04:59:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 04:59:51 +0000   Fri, 19 Jul 2024 04:59:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 04:59:51 +0000   Fri, 19 Jul 2024 04:59:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.126
	  Hostname:    multinode-270078-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d7cc20749b6e4b37aebd9e51c24bef2c
	  System UUID:                d7cc2074-9b6e-4b37-aebd-9e51c24bef2c
	  Boot ID:                    1a13ef49-3305-48e2-b6a2-98c406a1d221
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-88rhc       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m35s
	  kube-system                 kube-proxy-t666c    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m30s                  kube-proxy  
	  Normal  Starting                 18s                    kube-proxy  
	  Normal  Starting                 5m41s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m35s (x2 over 6m35s)  kubelet     Node multinode-270078-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m35s (x2 over 6m35s)  kubelet     Node multinode-270078-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m35s (x2 over 6m35s)  kubelet     Node multinode-270078-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m35s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m16s                  kubelet     Node multinode-270078-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m46s (x2 over 5m46s)  kubelet     Node multinode-270078-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m46s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m46s (x2 over 5m46s)  kubelet     Node multinode-270078-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m46s (x2 over 5m46s)  kubelet     Node multinode-270078-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m27s                  kubelet     Node multinode-270078-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  23s (x2 over 23s)      kubelet     Node multinode-270078-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x2 over 23s)      kubelet     Node multinode-270078-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x2 over 23s)      kubelet     Node multinode-270078-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                4s                     kubelet     Node multinode-270078-m03 status is now: NodeReady
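For reference, a node snapshot like the one above can usually be re-generated while the cluster is still running, assuming the kubeconfig context that minikube creates for the profile is still present (illustrative command, not captured output):

    kubectl --context multinode-270078 describe node multinode-270078-m03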
	
	
	==> dmesg <==
	[  +0.060006] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058057] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.184476] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.101561] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.244264] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +3.854866] systemd-fstab-generator[754]: Ignoring "noauto" option for root device
	[  +3.504684] systemd-fstab-generator[929]: Ignoring "noauto" option for root device
	[  +0.055979] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.980793] systemd-fstab-generator[1258]: Ignoring "noauto" option for root device
	[  +0.086386] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.165512] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.897627] systemd-fstab-generator[1447]: Ignoring "noauto" option for root device
	[ +12.615312] kauditd_printk_skb: 60 callbacks suppressed
	[Jul19 04:52] kauditd_printk_skb: 14 callbacks suppressed
	[Jul19 04:58] systemd-fstab-generator[2743]: Ignoring "noauto" option for root device
	[  +0.132436] systemd-fstab-generator[2755]: Ignoring "noauto" option for root device
	[  +0.172438] systemd-fstab-generator[2769]: Ignoring "noauto" option for root device
	[  +0.156709] systemd-fstab-generator[2781]: Ignoring "noauto" option for root device
	[  +0.298704] systemd-fstab-generator[2809]: Ignoring "noauto" option for root device
	[  +0.987039] systemd-fstab-generator[2907]: Ignoring "noauto" option for root device
	[  +1.875137] systemd-fstab-generator[3030]: Ignoring "noauto" option for root device
	[  +4.652277] kauditd_printk_skb: 184 callbacks suppressed
	[ +11.793761] kauditd_printk_skb: 32 callbacks suppressed
	[  +4.454092] systemd-fstab-generator[3849]: Ignoring "noauto" option for root device
	[ +17.478687] kauditd_printk_skb: 14 callbacks suppressed
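The same kernel ring buffer can typically be re-read on the node itself while the VM is still up (illustrative, reusing the profile name from this run):

    minikube -p multinode-270078 ssh -- sudo dmesg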
	
	
	==> etcd [133cfe56953f01460a9fa4494092d188466178166e2dcfe71035b5d6b7545e8f] <==
	{"level":"info","ts":"2024-07-19T04:58:10.096186Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-19T04:58:10.096213Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-19T04:58:10.09645Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2212c0bfe49c9415 switched to configuration voters=(2455236677277094933)"}
	{"level":"info","ts":"2024-07-19T04:58:10.096532Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3ecd98d5111bce24","local-member-id":"2212c0bfe49c9415","added-peer-id":"2212c0bfe49c9415","added-peer-peer-urls":["https://192.168.39.17:2380"]}
	{"level":"info","ts":"2024-07-19T04:58:10.096662Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3ecd98d5111bce24","local-member-id":"2212c0bfe49c9415","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T04:58:10.096703Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T04:58:10.106101Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-19T04:58:10.106298Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"2212c0bfe49c9415","initial-advertise-peer-urls":["https://192.168.39.17:2380"],"listen-peer-urls":["https://192.168.39.17:2380"],"advertise-client-urls":["https://192.168.39.17:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.17:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-19T04:58:10.106341Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-19T04:58:10.106492Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.17:2380"}
	{"level":"info","ts":"2024-07-19T04:58:10.106513Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.17:2380"}
	{"level":"info","ts":"2024-07-19T04:58:11.345313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2212c0bfe49c9415 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-19T04:58:11.345365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2212c0bfe49c9415 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-19T04:58:11.345403Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2212c0bfe49c9415 received MsgPreVoteResp from 2212c0bfe49c9415 at term 2"}
	{"level":"info","ts":"2024-07-19T04:58:11.345418Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2212c0bfe49c9415 became candidate at term 3"}
	{"level":"info","ts":"2024-07-19T04:58:11.345424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2212c0bfe49c9415 received MsgVoteResp from 2212c0bfe49c9415 at term 3"}
	{"level":"info","ts":"2024-07-19T04:58:11.345447Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2212c0bfe49c9415 became leader at term 3"}
	{"level":"info","ts":"2024-07-19T04:58:11.345459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2212c0bfe49c9415 elected leader 2212c0bfe49c9415 at term 3"}
	{"level":"info","ts":"2024-07-19T04:58:11.350363Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"2212c0bfe49c9415","local-member-attributes":"{Name:multinode-270078 ClientURLs:[https://192.168.39.17:2379]}","request-path":"/0/members/2212c0bfe49c9415/attributes","cluster-id":"3ecd98d5111bce24","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T04:58:11.350497Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T04:58:11.350595Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T04:58:11.351857Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T04:58:11.351909Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T04:58:11.352382Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-19T04:58:11.353364Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.17:2379"}
	
	
	==> etcd [3a6ddcbf56243021d5e0de54495d06019da39ed37ffac91a2b4f42cd4eae8884] <==
	{"level":"info","ts":"2024-07-19T04:52:27.048835Z","caller":"traceutil/trace.go:171","msg":"trace[1709312435] transaction","detail":"{read_only:false; response_revision:441; number_of_response:1; }","duration":"197.153757ms","start":"2024-07-19T04:52:26.851668Z","end":"2024-07-19T04:52:27.048821Z","steps":["trace[1709312435] 'process raft request'  (duration: 191.608229ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T04:53:20.894632Z","caller":"traceutil/trace.go:171","msg":"trace[1137693780] transaction","detail":"{read_only:false; response_revision:572; number_of_response:1; }","duration":"234.296483ms","start":"2024-07-19T04:53:20.660311Z","end":"2024-07-19T04:53:20.894608Z","steps":["trace[1137693780] 'process raft request'  (duration: 234.209972ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T04:53:20.901055Z","caller":"traceutil/trace.go:171","msg":"trace[586278936] linearizableReadLoop","detail":"{readStateIndex:608; appliedIndex:607; }","duration":"137.391031ms","start":"2024-07-19T04:53:20.763652Z","end":"2024-07-19T04:53:20.901043Z","steps":["trace[586278936] 'read index received'  (duration: 131.243785ms)","trace[586278936] 'applied index is now lower than readState.Index'  (duration: 6.146603ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T04:53:20.901215Z","caller":"traceutil/trace.go:171","msg":"trace[507960596] transaction","detail":"{read_only:false; response_revision:573; number_of_response:1; }","duration":"174.48987ms","start":"2024-07-19T04:53:20.726717Z","end":"2024-07-19T04:53:20.901207Z","steps":["trace[507960596] 'process raft request'  (duration: 174.265438ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T04:53:20.901424Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.757845ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-270078-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-07-19T04:53:20.901506Z","caller":"traceutil/trace.go:171","msg":"trace[621944805] range","detail":"{range_begin:/registry/minions/multinode-270078-m03; range_end:; response_count:1; response_revision:573; }","duration":"137.836783ms","start":"2024-07-19T04:53:20.763627Z","end":"2024-07-19T04:53:20.901464Z","steps":["trace[621944805] 'agreement among raft nodes before linearized reading'  (duration: 137.712914ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T04:53:31.246634Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.20408ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10670594086507534256 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.17\" mod_revision:596 > success:<request_put:<key:\"/registry/masterleases/192.168.39.17\" value_size:66 lease:1447222049652758446 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.17\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-19T04:53:31.24684Z","caller":"traceutil/trace.go:171","msg":"trace[1315389735] transaction","detail":"{read_only:false; response_revision:625; number_of_response:1; }","duration":"155.845452ms","start":"2024-07-19T04:53:31.090983Z","end":"2024-07-19T04:53:31.246829Z","steps":["trace[1315389735] 'process raft request'  (duration: 155.753682ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T04:53:31.247062Z","caller":"traceutil/trace.go:171","msg":"trace[824638523] transaction","detail":"{read_only:false; response_revision:624; number_of_response:1; }","duration":"219.028211ms","start":"2024-07-19T04:53:31.028025Z","end":"2024-07-19T04:53:31.247053Z","steps":["trace[824638523] 'process raft request'  (duration: 86.219621ms)","trace[824638523] 'compare'  (duration: 132.106337ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T04:53:31.440405Z","caller":"traceutil/trace.go:171","msg":"trace[469390373] linearizableReadLoop","detail":"{readStateIndex:670; appliedIndex:669; }","duration":"191.833488ms","start":"2024-07-19T04:53:31.248536Z","end":"2024-07-19T04:53:31.440369Z","steps":["trace[469390373] 'read index received'  (duration: 125.675151ms)","trace[469390373] 'applied index is now lower than readState.Index'  (duration: 66.157587ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T04:53:31.440517Z","caller":"traceutil/trace.go:171","msg":"trace[86144273] transaction","detail":"{read_only:false; response_revision:626; number_of_response:1; }","duration":"233.16083ms","start":"2024-07-19T04:53:31.207344Z","end":"2024-07-19T04:53:31.440505Z","steps":["trace[86144273] 'process raft request'  (duration: 166.925169ms)","trace[86144273] 'compare'  (duration: 65.691605ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T04:53:31.441061Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.50841ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:421"}
	{"level":"info","ts":"2024-07-19T04:53:31.444317Z","caller":"traceutil/trace.go:171","msg":"trace[636042706] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:626; }","duration":"195.750529ms","start":"2024-07-19T04:53:31.248515Z","end":"2024-07-19T04:53:31.444266Z","steps":["trace[636042706] 'agreement among raft nodes before linearized reading'  (duration: 192.489226ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T04:53:31.780628Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"234.220306ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T04:53:31.781261Z","caller":"traceutil/trace.go:171","msg":"trace[1409875192] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:627; }","duration":"234.911295ms","start":"2024-07-19T04:53:31.546334Z","end":"2024-07-19T04:53:31.781245Z","steps":["trace[1409875192] 'range keys from in-memory index tree'  (duration: 234.176498ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T04:56:33.813585Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-19T04:56:33.813691Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-270078","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.17:2380"],"advertise-client-urls":["https://192.168.39.17:2379"]}
	{"level":"warn","ts":"2024-07-19T04:56:33.813806Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T04:56:33.813887Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T04:56:33.889594Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.17:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T04:56:33.889679Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.17:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-19T04:56:33.891335Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"2212c0bfe49c9415","current-leader-member-id":"2212c0bfe49c9415"}
	{"level":"info","ts":"2024-07-19T04:56:33.89385Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.17:2380"}
	{"level":"info","ts":"2024-07-19T04:56:33.894291Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.17:2380"}
	{"level":"info","ts":"2024-07-19T04:56:33.89436Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-270078","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.17:2380"],"advertise-client-urls":["https://192.168.39.17:2379"]}
	
	
	==> kernel <==
	 04:59:55 up 8 min,  0 users,  load average: 0.20, 0.21, 0.11
	Linux multinode-270078 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [055cf104d6bcd94aa209fcb410e05f96ce191340f62eecad3826a7ada7b521d1] <==
	I0719 04:55:47.577956       1 main.go:326] Node multinode-270078-m03 has CIDR [10.244.3.0/24] 
	I0719 04:55:57.576630       1 main.go:299] Handling node with IPs: map[192.168.39.17:{}]
	I0719 04:55:57.576689       1 main.go:303] handling current node
	I0719 04:55:57.576708       1 main.go:299] Handling node with IPs: map[192.168.39.199:{}]
	I0719 04:55:57.576715       1 main.go:326] Node multinode-270078-m02 has CIDR [10.244.1.0/24] 
	I0719 04:55:57.576913       1 main.go:299] Handling node with IPs: map[192.168.39.126:{}]
	I0719 04:55:57.576943       1 main.go:326] Node multinode-270078-m03 has CIDR [10.244.3.0/24] 
	I0719 04:56:07.585783       1 main.go:299] Handling node with IPs: map[192.168.39.17:{}]
	I0719 04:56:07.585911       1 main.go:303] handling current node
	I0719 04:56:07.585940       1 main.go:299] Handling node with IPs: map[192.168.39.199:{}]
	I0719 04:56:07.585958       1 main.go:326] Node multinode-270078-m02 has CIDR [10.244.1.0/24] 
	I0719 04:56:07.586082       1 main.go:299] Handling node with IPs: map[192.168.39.126:{}]
	I0719 04:56:07.586104       1 main.go:326] Node multinode-270078-m03 has CIDR [10.244.3.0/24] 
	I0719 04:56:17.585481       1 main.go:299] Handling node with IPs: map[192.168.39.17:{}]
	I0719 04:56:17.585524       1 main.go:303] handling current node
	I0719 04:56:17.585537       1 main.go:299] Handling node with IPs: map[192.168.39.199:{}]
	I0719 04:56:17.585556       1 main.go:326] Node multinode-270078-m02 has CIDR [10.244.1.0/24] 
	I0719 04:56:17.585705       1 main.go:299] Handling node with IPs: map[192.168.39.126:{}]
	I0719 04:56:17.585726       1 main.go:326] Node multinode-270078-m03 has CIDR [10.244.3.0/24] 
	I0719 04:56:27.585835       1 main.go:299] Handling node with IPs: map[192.168.39.17:{}]
	I0719 04:56:27.585878       1 main.go:303] handling current node
	I0719 04:56:27.585892       1 main.go:299] Handling node with IPs: map[192.168.39.199:{}]
	I0719 04:56:27.585897       1 main.go:326] Node multinode-270078-m02 has CIDR [10.244.1.0/24] 
	I0719 04:56:27.586008       1 main.go:299] Handling node with IPs: map[192.168.39.126:{}]
	I0719 04:56:27.586027       1 main.go:326] Node multinode-270078-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [9be01f79f8da3ef0bc186f7447b4204d4d63a6ccac0071192ac76a67625560d1] <==
	I0719 04:59:14.382906       1 main.go:326] Node multinode-270078-m02 has CIDR [10.244.1.0/24] 
	I0719 04:59:24.382137       1 main.go:299] Handling node with IPs: map[192.168.39.17:{}]
	I0719 04:59:24.382316       1 main.go:303] handling current node
	I0719 04:59:24.382349       1 main.go:299] Handling node with IPs: map[192.168.39.199:{}]
	I0719 04:59:24.382421       1 main.go:326] Node multinode-270078-m02 has CIDR [10.244.1.0/24] 
	I0719 04:59:24.382574       1 main.go:299] Handling node with IPs: map[192.168.39.126:{}]
	I0719 04:59:24.382618       1 main.go:326] Node multinode-270078-m03 has CIDR [10.244.3.0/24] 
	I0719 04:59:34.382812       1 main.go:299] Handling node with IPs: map[192.168.39.17:{}]
	I0719 04:59:34.382973       1 main.go:303] handling current node
	I0719 04:59:34.383052       1 main.go:299] Handling node with IPs: map[192.168.39.199:{}]
	I0719 04:59:34.383096       1 main.go:326] Node multinode-270078-m02 has CIDR [10.244.1.0/24] 
	I0719 04:59:34.383330       1 main.go:299] Handling node with IPs: map[192.168.39.126:{}]
	I0719 04:59:34.383384       1 main.go:326] Node multinode-270078-m03 has CIDR [10.244.2.0/24] 
	I0719 04:59:44.383186       1 main.go:299] Handling node with IPs: map[192.168.39.17:{}]
	I0719 04:59:44.383260       1 main.go:303] handling current node
	I0719 04:59:44.383305       1 main.go:299] Handling node with IPs: map[192.168.39.199:{}]
	I0719 04:59:44.383313       1 main.go:326] Node multinode-270078-m02 has CIDR [10.244.1.0/24] 
	I0719 04:59:44.383509       1 main.go:299] Handling node with IPs: map[192.168.39.126:{}]
	I0719 04:59:44.383536       1 main.go:326] Node multinode-270078-m03 has CIDR [10.244.2.0/24] 
	I0719 04:59:54.383859       1 main.go:299] Handling node with IPs: map[192.168.39.199:{}]
	I0719 04:59:54.383926       1 main.go:326] Node multinode-270078-m02 has CIDR [10.244.1.0/24] 
	I0719 04:59:54.384074       1 main.go:299] Handling node with IPs: map[192.168.39.126:{}]
	I0719 04:59:54.384081       1 main.go:326] Node multinode-270078-m03 has CIDR [10.244.2.0/24] 
	I0719 04:59:54.384119       1 main.go:299] Handling node with IPs: map[192.168.39.17:{}]
	I0719 04:59:54.384124       1 main.go:303] handling current node
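Logs for a specific container, such as the kindnet instance above, can usually be pulled straight from CRI-O inside the VM using the container ID shown in the section header (crictl accepts a shortened ID prefix; illustrative command):

    minikube -p multinode-270078 ssh -- sudo crictl logs 9be01f79f8da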
	
	
	==> kube-apiserver [a38317f7436fad30424d945b8be343738b549bd1c74e0a51e050bdf48209a595] <==
	I0719 04:58:12.597474       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0719 04:58:12.604610       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0719 04:58:12.605479       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0719 04:58:12.606473       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0719 04:58:12.607203       1 aggregator.go:165] initial CRD sync complete...
	I0719 04:58:12.607213       1 autoregister_controller.go:141] Starting autoregister controller
	I0719 04:58:12.607218       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0719 04:58:12.607223       1 cache.go:39] Caches are synced for autoregister controller
	I0719 04:58:12.607440       1 shared_informer.go:320] Caches are synced for configmaps
	I0719 04:58:12.609943       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0719 04:58:12.610476       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0719 04:58:12.610499       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	E0719 04:58:12.613431       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0719 04:58:12.614072       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0719 04:58:12.620822       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0719 04:58:12.620854       1 policy_source.go:224] refreshing policies
	I0719 04:58:12.656479       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0719 04:58:13.529707       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0719 04:58:14.360204       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0719 04:58:14.466192       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0719 04:58:14.479389       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0719 04:58:14.539005       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0719 04:58:14.548177       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0719 04:58:25.066388       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 04:58:25.068073       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [de944624d060c786278833c561aff05831f19ee086f8a1db3bcd28573b7cfd58] <==
	W0719 04:56:33.833464       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.833494       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.833567       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.833597       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.833623       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.833669       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.833701       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.840494       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.844241       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.844301       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.844349       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.844381       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.845862       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.845923       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.845971       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.846015       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.846058       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.846102       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.846135       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.846185       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.846235       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.846279       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.848118       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.848919       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.849016       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [4c87c6e53508415123b14762a3523c4e36d53f47c3607e4db1618bd3d9d3792e] <==
	I0719 04:58:25.781879       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0719 04:58:49.590462       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.411425ms"
	I0719 04:58:49.590951       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.527µs"
	I0719 04:58:49.601817       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.909633ms"
	I0719 04:58:49.601938       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.026µs"
	I0719 04:58:49.602228       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="125.921µs"
	I0719 04:58:50.926997       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.057µs"
	I0719 04:58:53.860815       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-270078-m02\" does not exist"
	I0719 04:58:53.879304       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-270078-m02" podCIDRs=["10.244.1.0/24"]
	I0719 04:58:55.745253       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="119.968µs"
	I0719 04:58:55.787907       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.187µs"
	I0719 04:58:55.795545       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="79.137µs"
	I0719 04:58:55.814839       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="270.387µs"
	I0719 04:58:55.822279       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.927µs"
	I0719 04:58:55.825795       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.201µs"
	I0719 04:59:13.212583       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-270078-m02"
	I0719 04:59:13.231839       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.873µs"
	I0719 04:59:13.244901       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.375µs"
	I0719 04:59:16.250081       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.970812ms"
	I0719 04:59:16.250299       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="114.896µs"
	I0719 04:59:31.064113       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-270078-m02"
	I0719 04:59:32.489123       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-270078-m03\" does not exist"
	I0719 04:59:32.489226       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-270078-m02"
	I0719 04:59:32.499673       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-270078-m03" podCIDRs=["10.244.2.0/24"]
	I0719 04:59:51.977162       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-270078-m02"
	
	
	==> kube-controller-manager [c4ed35a688d466e50ef053719ac811f72487848d9a77bb399a22fe1e445c6a68] <==
	I0719 04:52:27.046037       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-270078-m02\" does not exist"
	I0719 04:52:27.060958       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-270078-m02" podCIDRs=["10.244.1.0/24"]
	I0719 04:52:29.737457       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-270078-m02"
	I0719 04:52:47.631114       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-270078-m02"
	I0719 04:52:50.163335       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.759648ms"
	I0719 04:52:50.174045       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.566045ms"
	I0719 04:52:50.176321       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.322µs"
	I0719 04:52:50.176906       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.392µs"
	I0719 04:52:53.389146       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.634653ms"
	I0719 04:52:53.389229       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.183µs"
	I0719 04:52:53.935390       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.242966ms"
	I0719 04:52:53.935600       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.99µs"
	I0719 04:53:20.902858       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-270078-m02"
	I0719 04:53:20.904326       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-270078-m03\" does not exist"
	I0719 04:53:20.970525       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-270078-m03" podCIDRs=["10.244.2.0/24"]
	I0719 04:53:24.756470       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-270078-m03"
	I0719 04:53:39.862160       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-270078-m02"
	I0719 04:54:08.064042       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-270078-m02"
	I0719 04:54:09.136417       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-270078-m03\" does not exist"
	I0719 04:54:09.138857       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-270078-m02"
	I0719 04:54:09.154083       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-270078-m03" podCIDRs=["10.244.3.0/24"]
	I0719 04:54:28.376714       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-270078-m02"
	I0719 04:55:09.814512       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-270078-m02"
	I0719 04:55:09.881601       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.133543ms"
	I0719 04:55:09.881830       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.714µs"
	
	
	==> kube-proxy [33b69ea0ad2f4ced8ca5a9cbb00cd82cee4d47163212947312b7db626ee10f91] <==
	I0719 04:51:47.300732       1 server_linux.go:69] "Using iptables proxy"
	I0719 04:51:47.311982       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.17"]
	I0719 04:51:47.342717       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 04:51:47.342806       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 04:51:47.342821       1 server_linux.go:165] "Using iptables Proxier"
	I0719 04:51:47.344929       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 04:51:47.345116       1 server.go:872] "Version info" version="v1.30.3"
	I0719 04:51:47.345136       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 04:51:47.346275       1 config.go:192] "Starting service config controller"
	I0719 04:51:47.346335       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 04:51:47.346357       1 config.go:101] "Starting endpoint slice config controller"
	I0719 04:51:47.346361       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 04:51:47.346828       1 config.go:319] "Starting node config controller"
	I0719 04:51:47.346848       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 04:51:47.446515       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 04:51:47.446562       1 shared_informer.go:320] Caches are synced for service config
	I0719 04:51:47.447401       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [c9fb3de46f094b0d0a667b70372c5d21aa341c4924b29818f0d8c37a44214901] <==
	I0719 04:58:13.652793       1 server_linux.go:69] "Using iptables proxy"
	I0719 04:58:13.708803       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.17"]
	I0719 04:58:13.754382       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 04:58:13.754423       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 04:58:13.754439       1 server_linux.go:165] "Using iptables Proxier"
	I0719 04:58:13.756935       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 04:58:13.757698       1 server.go:872] "Version info" version="v1.30.3"
	I0719 04:58:13.757834       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 04:58:13.761172       1 config.go:192] "Starting service config controller"
	I0719 04:58:13.761230       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 04:58:13.761273       1 config.go:101] "Starting endpoint slice config controller"
	I0719 04:58:13.761290       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 04:58:13.761919       1 config.go:319] "Starting node config controller"
	I0719 04:58:13.761977       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 04:58:13.862318       1 shared_informer.go:320] Caches are synced for node config
	I0719 04:58:13.862868       1 shared_informer.go:320] Caches are synced for service config
	I0719 04:58:13.862937       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [938b8fa47de5bc6b50fc4dc1842ace7580870e225f15d76f6b4e6dce2fc79401] <==
	E0719 04:51:29.399576       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 04:51:29.399673       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 04:51:29.399699       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0719 04:51:29.399799       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 04:51:29.399827       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0719 04:51:29.399943       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 04:51:29.399965       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0719 04:51:29.400819       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 04:51:29.401800       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0719 04:51:30.304640       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 04:51:30.304694       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0719 04:51:30.308744       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 04:51:30.308828       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 04:51:30.441651       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0719 04:51:30.441691       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0719 04:51:30.461803       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 04:51:30.461871       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0719 04:51:30.563400       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 04:51:30.563475       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0719 04:51:30.620123       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 04:51:30.620276       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0719 04:51:30.859107       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 04:51:30.859361       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0719 04:51:33.088331       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0719 04:56:33.823981       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c4243cd4492945493dffe7402237b7f0a5227fd3901c70b61f08f4914c3fb9e0] <==
	I0719 04:58:10.793343       1 serving.go:380] Generated self-signed cert in-memory
	W0719 04:58:12.560976       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0719 04:58:12.561125       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 04:58:12.561155       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0719 04:58:12.561220       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0719 04:58:12.580911       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0719 04:58:12.581016       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 04:58:12.582686       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0719 04:58:12.582727       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 04:58:12.583087       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0719 04:58:12.583133       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0719 04:58:12.684344       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 19 04:58:09 multinode-270078 kubelet[3037]: E0719 04:58:09.809682    3037 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.17:8443: connect: connection refused
	Jul 19 04:58:10 multinode-270078 kubelet[3037]: I0719 04:58:10.356333    3037 kubelet_node_status.go:73] "Attempting to register node" node="multinode-270078"
	Jul 19 04:58:12 multinode-270078 kubelet[3037]: I0719 04:58:12.645570    3037 kubelet_node_status.go:112] "Node was previously registered" node="multinode-270078"
	Jul 19 04:58:12 multinode-270078 kubelet[3037]: I0719 04:58:12.646233    3037 kubelet_node_status.go:76] "Successfully registered node" node="multinode-270078"
	Jul 19 04:58:12 multinode-270078 kubelet[3037]: I0719 04:58:12.648899    3037 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 19 04:58:12 multinode-270078 kubelet[3037]: I0719 04:58:12.649966    3037 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 19 04:58:12 multinode-270078 kubelet[3037]: I0719 04:58:12.831330    3037 apiserver.go:52] "Watching apiserver"
	Jul 19 04:58:12 multinode-270078 kubelet[3037]: I0719 04:58:12.835132    3037 topology_manager.go:215] "Topology Admit Handler" podUID="1361f5f1-8094-4ca4-b6b9-3a104ad7d9a4" podNamespace="kube-system" podName="kube-proxy-7qj9p"
	Jul 19 04:58:12 multinode-270078 kubelet[3037]: I0719 04:58:12.835276    3037 topology_manager.go:215] "Topology Admit Handler" podUID="43168421-b0df-4c84-b04a-7d1546c9a743" podNamespace="kube-system" podName="coredns-7db6d8ff4d-vgprr"
	Jul 19 04:58:12 multinode-270078 kubelet[3037]: I0719 04:58:12.835349    3037 topology_manager.go:215] "Topology Admit Handler" podUID="a11e3057-6b32-41a1-ac4e-7d8d225d7daa" podNamespace="kube-system" podName="kindnet-fzrm8"
	Jul 19 04:58:12 multinode-270078 kubelet[3037]: I0719 04:58:12.835433    3037 topology_manager.go:215] "Topology Admit Handler" podUID="d6b93c21-dfc9-4700-b89e-075132f74950" podNamespace="kube-system" podName="storage-provisioner"
	Jul 19 04:58:12 multinode-270078 kubelet[3037]: I0719 04:58:12.835490    3037 topology_manager.go:215] "Topology Admit Handler" podUID="c62f9d80-8985-4a63-88b5-587470389f71" podNamespace="default" podName="busybox-fc5497c4f-hnr7x"
	Jul 19 04:58:12 multinode-270078 kubelet[3037]: I0719 04:58:12.848532    3037 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 19 04:58:12 multinode-270078 kubelet[3037]: I0719 04:58:12.854613    3037 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a11e3057-6b32-41a1-ac4e-7d8d225d7daa-lib-modules\") pod \"kindnet-fzrm8\" (UID: \"a11e3057-6b32-41a1-ac4e-7d8d225d7daa\") " pod="kube-system/kindnet-fzrm8"
	Jul 19 04:58:12 multinode-270078 kubelet[3037]: I0719 04:58:12.854661    3037 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1361f5f1-8094-4ca4-b6b9-3a104ad7d9a4-xtables-lock\") pod \"kube-proxy-7qj9p\" (UID: \"1361f5f1-8094-4ca4-b6b9-3a104ad7d9a4\") " pod="kube-system/kube-proxy-7qj9p"
	Jul 19 04:58:12 multinode-270078 kubelet[3037]: I0719 04:58:12.854729    3037 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a11e3057-6b32-41a1-ac4e-7d8d225d7daa-cni-cfg\") pod \"kindnet-fzrm8\" (UID: \"a11e3057-6b32-41a1-ac4e-7d8d225d7daa\") " pod="kube-system/kindnet-fzrm8"
	Jul 19 04:58:12 multinode-270078 kubelet[3037]: I0719 04:58:12.854776    3037 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a11e3057-6b32-41a1-ac4e-7d8d225d7daa-xtables-lock\") pod \"kindnet-fzrm8\" (UID: \"a11e3057-6b32-41a1-ac4e-7d8d225d7daa\") " pod="kube-system/kindnet-fzrm8"
	Jul 19 04:58:12 multinode-270078 kubelet[3037]: I0719 04:58:12.854794    3037 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d6b93c21-dfc9-4700-b89e-075132f74950-tmp\") pod \"storage-provisioner\" (UID: \"d6b93c21-dfc9-4700-b89e-075132f74950\") " pod="kube-system/storage-provisioner"
	Jul 19 04:58:12 multinode-270078 kubelet[3037]: I0719 04:58:12.854807    3037 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1361f5f1-8094-4ca4-b6b9-3a104ad7d9a4-lib-modules\") pod \"kube-proxy-7qj9p\" (UID: \"1361f5f1-8094-4ca4-b6b9-3a104ad7d9a4\") " pod="kube-system/kube-proxy-7qj9p"
	Jul 19 04:58:18 multinode-270078 kubelet[3037]: I0719 04:58:18.191114    3037 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 19 04:59:08 multinode-270078 kubelet[3037]: E0719 04:59:08.918537    3037 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 04:59:08 multinode-270078 kubelet[3037]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 04:59:08 multinode-270078 kubelet[3037]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 04:59:08 multinode-270078 kubelet[3037]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 04:59:08 multinode-270078 kubelet[3037]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 04:59:54.344170  164525 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19302-122995/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
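The "bufio.Scanner: token too long" failure above is Go's bufio.Scanner giving up on a line longer than its default 64 KiB token limit, which a line in lastStart.txt evidently exceeds. A minimal sketch of reading such a file with an enlarged scanner buffer (the path and the 10 MiB cap are illustrative assumptions, not values taken from the harness):

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // illustrative path
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default limit is bufio.MaxScanTokenSize (64 KiB); raising it keeps
		// very long log lines from failing with "bufio.Scanner: token too long".
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

		for sc.Scan() {
			fmt.Println(len(sc.Text())) // e.g. report each line's length
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err) // without the Buffer call, ErrTooLong surfaces here
		}
	}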
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-270078 -n multinode-270078
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-270078 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (325.16s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 stop
E0719 05:01:36.835797  130170 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/functional-554179/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-270078 stop: exit status 82 (2m0.460879892s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-270078-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-270078 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-270078 status: exit status 3 (18.819964133s)

                                                
                                                
-- stdout --
	multinode-270078
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-270078-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 05:02:17.569456  165196 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.199:22: connect: no route to host
	E0719 05:02:17.569501  165196 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.199:22: connect: no route to host

                                                
                                                
** /stderr **
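The status errors above are plain TCP failures reaching the m02 node's SSH port; "connect: no route to host" (as opposed to a timeout or "connection refused") means the VM's address is unreachable at the network level, which is consistent with a machine caught partway through stopping. A small reachability probe sketched under that assumption (the address and timeout are illustrative, not part of the test):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Address of the worker node's SSH port as reported above; adjust as needed.
		conn, err := net.DialTimeout("tcp", "192.168.39.199:22", 5*time.Second)
		if err != nil {
			// "no route to host", "i/o timeout" and "connection refused" point at
			// different failure modes: host unreachable, filtered/slow path, sshd down.
			fmt.Println("dial failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("ssh port reachable")
	}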
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-270078 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-270078 -n multinode-270078
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-270078 logs -n 25: (1.379575632s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-270078 ssh -n                                                                 | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | multinode-270078-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-270078 cp multinode-270078-m02:/home/docker/cp-test.txt                       | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | multinode-270078:/home/docker/cp-test_multinode-270078-m02_multinode-270078.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-270078 ssh -n                                                                 | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | multinode-270078-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-270078 ssh -n multinode-270078 sudo cat                                       | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | /home/docker/cp-test_multinode-270078-m02_multinode-270078.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-270078 cp multinode-270078-m02:/home/docker/cp-test.txt                       | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | multinode-270078-m03:/home/docker/cp-test_multinode-270078-m02_multinode-270078-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-270078 ssh -n                                                                 | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | multinode-270078-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-270078 ssh -n multinode-270078-m03 sudo cat                                   | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | /home/docker/cp-test_multinode-270078-m02_multinode-270078-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-270078 cp testdata/cp-test.txt                                                | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | multinode-270078-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-270078 ssh -n                                                                 | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | multinode-270078-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-270078 cp multinode-270078-m03:/home/docker/cp-test.txt                       | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3247087681/001/cp-test_multinode-270078-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-270078 ssh -n                                                                 | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | multinode-270078-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-270078 cp multinode-270078-m03:/home/docker/cp-test.txt                       | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | multinode-270078:/home/docker/cp-test_multinode-270078-m03_multinode-270078.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-270078 ssh -n                                                                 | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | multinode-270078-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-270078 ssh -n multinode-270078 sudo cat                                       | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | /home/docker/cp-test_multinode-270078-m03_multinode-270078.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-270078 cp multinode-270078-m03:/home/docker/cp-test.txt                       | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | multinode-270078-m02:/home/docker/cp-test_multinode-270078-m03_multinode-270078-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-270078 ssh -n                                                                 | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | multinode-270078-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-270078 ssh -n multinode-270078-m02 sudo cat                                   | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | /home/docker/cp-test_multinode-270078-m03_multinode-270078-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-270078 node stop m03                                                          | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	| node    | multinode-270078 node start                                                             | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:54 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-270078                                                                | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:54 UTC |                     |
	| stop    | -p multinode-270078                                                                     | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:54 UTC |                     |
	| start   | -p multinode-270078                                                                     | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:56 UTC | 19 Jul 24 04:59 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-270078                                                                | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:59 UTC |                     |
	| node    | multinode-270078 node delete                                                            | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:59 UTC | 19 Jul 24 04:59 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-270078 stop                                                                   | multinode-270078 | jenkins | v1.33.1 | 19 Jul 24 04:59 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 04:56:32
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 04:56:32.830279  163442 out.go:291] Setting OutFile to fd 1 ...
	I0719 04:56:32.830618  163442 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:56:32.830630  163442 out.go:304] Setting ErrFile to fd 2...
	I0719 04:56:32.830636  163442 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:56:32.830928  163442 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-122995/.minikube/bin
	I0719 04:56:32.831673  163442 out.go:298] Setting JSON to false
	I0719 04:56:32.832844  163442 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":9536,"bootTime":1721355457,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 04:56:32.832909  163442 start.go:139] virtualization: kvm guest
	I0719 04:56:32.835362  163442 out.go:177] * [multinode-270078] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 04:56:32.836732  163442 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 04:56:32.836724  163442 notify.go:220] Checking for updates...
	I0719 04:56:32.839026  163442 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 04:56:32.840137  163442 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-122995/kubeconfig
	I0719 04:56:32.841341  163442 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-122995/.minikube
	I0719 04:56:32.842563  163442 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 04:56:32.843638  163442 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 04:56:32.845208  163442 config.go:182] Loaded profile config "multinode-270078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:56:32.845324  163442 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 04:56:32.845755  163442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:56:32.845810  163442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:56:32.860928  163442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40517
	I0719 04:56:32.861474  163442 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:56:32.862112  163442 main.go:141] libmachine: Using API Version  1
	I0719 04:56:32.862138  163442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:56:32.862556  163442 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:56:32.862780  163442 main.go:141] libmachine: (multinode-270078) Calling .DriverName
	I0719 04:56:32.898880  163442 out.go:177] * Using the kvm2 driver based on existing profile
	I0719 04:56:32.900033  163442 start.go:297] selected driver: kvm2
	I0719 04:56:32.900061  163442 start.go:901] validating driver "kvm2" against &{Name:multinode-270078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-270078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.199 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.126 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:56:32.900257  163442 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 04:56:32.900705  163442 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 04:56:32.900808  163442 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-122995/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 04:56:32.916182  163442 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 04:56:32.917182  163442 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 04:56:32.917265  163442 cni.go:84] Creating CNI manager for ""
	I0719 04:56:32.917283  163442 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0719 04:56:32.917387  163442 start.go:340] cluster config:
	{Name:multinode-270078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-270078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.199 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.126 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:56:32.917563  163442 iso.go:125] acquiring lock: {Name:mk610026cb7ac7ecfa6440021a031d3b49160f81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 04:56:32.920243  163442 out.go:177] * Starting "multinode-270078" primary control-plane node in "multinode-270078" cluster
	I0719 04:56:32.921396  163442 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 04:56:32.921440  163442 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 04:56:32.921455  163442 cache.go:56] Caching tarball of preloaded images
	I0719 04:56:32.921545  163442 preload.go:172] Found /home/jenkins/minikube-integration/19302-122995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 04:56:32.921560  163442 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0719 04:56:32.921748  163442 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/multinode-270078/config.json ...
	I0719 04:56:32.922004  163442 start.go:360] acquireMachinesLock for multinode-270078: {Name:mkfbbe6ca8c44534b944b48224a0199ec825bc72 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 04:56:32.922062  163442 start.go:364] duration metric: took 31.582µs to acquireMachinesLock for "multinode-270078"
	I0719 04:56:32.922080  163442 start.go:96] Skipping create...Using existing machine configuration
	I0719 04:56:32.922086  163442 fix.go:54] fixHost starting: 
	I0719 04:56:32.922523  163442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:56:32.922570  163442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:56:32.938482  163442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37145
	I0719 04:56:32.938916  163442 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:56:32.939451  163442 main.go:141] libmachine: Using API Version  1
	I0719 04:56:32.939471  163442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:56:32.939845  163442 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:56:32.940083  163442 main.go:141] libmachine: (multinode-270078) Calling .DriverName
	I0719 04:56:32.940235  163442 main.go:141] libmachine: (multinode-270078) Calling .GetState
	I0719 04:56:32.941993  163442 fix.go:112] recreateIfNeeded on multinode-270078: state=Running err=<nil>
	W0719 04:56:32.942057  163442 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 04:56:32.944634  163442 out.go:177] * Updating the running kvm2 "multinode-270078" VM ...
	I0719 04:56:32.946117  163442 machine.go:94] provisionDockerMachine start ...
	I0719 04:56:32.946137  163442 main.go:141] libmachine: (multinode-270078) Calling .DriverName
	I0719 04:56:32.946348  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHHostname
	I0719 04:56:32.949198  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:56:32.949771  163442 main.go:141] libmachine: (multinode-270078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:0b:92", ip: ""} in network mk-multinode-270078: {Iface:virbr1 ExpiryTime:2024-07-19 05:51:08 +0000 UTC Type:0 Mac:52:54:00:16:0b:92 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-270078 Clientid:01:52:54:00:16:0b:92}
	I0719 04:56:32.949810  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:56:32.949949  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHPort
	I0719 04:56:32.950123  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHKeyPath
	I0719 04:56:32.950310  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHKeyPath
	I0719 04:56:32.950458  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHUsername
	I0719 04:56:32.950642  163442 main.go:141] libmachine: Using SSH client type: native
	I0719 04:56:32.950830  163442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0719 04:56:32.950843  163442 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 04:56:33.079669  163442 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-270078
	
	I0719 04:56:33.079714  163442 main.go:141] libmachine: (multinode-270078) Calling .GetMachineName
	I0719 04:56:33.079950  163442 buildroot.go:166] provisioning hostname "multinode-270078"
	I0719 04:56:33.079983  163442 main.go:141] libmachine: (multinode-270078) Calling .GetMachineName
	I0719 04:56:33.080196  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHHostname
	I0719 04:56:33.082932  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:56:33.083363  163442 main.go:141] libmachine: (multinode-270078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:0b:92", ip: ""} in network mk-multinode-270078: {Iface:virbr1 ExpiryTime:2024-07-19 05:51:08 +0000 UTC Type:0 Mac:52:54:00:16:0b:92 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-270078 Clientid:01:52:54:00:16:0b:92}
	I0719 04:56:33.083391  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:56:33.083587  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHPort
	I0719 04:56:33.083808  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHKeyPath
	I0719 04:56:33.083950  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHKeyPath
	I0719 04:56:33.084119  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHUsername
	I0719 04:56:33.084310  163442 main.go:141] libmachine: Using SSH client type: native
	I0719 04:56:33.084488  163442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0719 04:56:33.084504  163442 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-270078 && echo "multinode-270078" | sudo tee /etc/hostname
	I0719 04:56:33.216867  163442 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-270078
	
	I0719 04:56:33.216906  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHHostname
	I0719 04:56:33.219587  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:56:33.220017  163442 main.go:141] libmachine: (multinode-270078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:0b:92", ip: ""} in network mk-multinode-270078: {Iface:virbr1 ExpiryTime:2024-07-19 05:51:08 +0000 UTC Type:0 Mac:52:54:00:16:0b:92 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-270078 Clientid:01:52:54:00:16:0b:92}
	I0719 04:56:33.220050  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:56:33.220296  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHPort
	I0719 04:56:33.220519  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHKeyPath
	I0719 04:56:33.220669  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHKeyPath
	I0719 04:56:33.220813  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHUsername
	I0719 04:56:33.220961  163442 main.go:141] libmachine: Using SSH client type: native
	I0719 04:56:33.221203  163442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0719 04:56:33.221232  163442 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-270078' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-270078/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-270078' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 04:56:33.333766  163442 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 04:56:33.333800  163442 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-122995/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-122995/.minikube}
	I0719 04:56:33.333828  163442 buildroot.go:174] setting up certificates
	I0719 04:56:33.333837  163442 provision.go:84] configureAuth start
	I0719 04:56:33.333849  163442 main.go:141] libmachine: (multinode-270078) Calling .GetMachineName
	I0719 04:56:33.334119  163442 main.go:141] libmachine: (multinode-270078) Calling .GetIP
	I0719 04:56:33.336703  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:56:33.337026  163442 main.go:141] libmachine: (multinode-270078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:0b:92", ip: ""} in network mk-multinode-270078: {Iface:virbr1 ExpiryTime:2024-07-19 05:51:08 +0000 UTC Type:0 Mac:52:54:00:16:0b:92 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-270078 Clientid:01:52:54:00:16:0b:92}
	I0719 04:56:33.337049  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:56:33.337249  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHHostname
	I0719 04:56:33.339292  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:56:33.339602  163442 main.go:141] libmachine: (multinode-270078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:0b:92", ip: ""} in network mk-multinode-270078: {Iface:virbr1 ExpiryTime:2024-07-19 05:51:08 +0000 UTC Type:0 Mac:52:54:00:16:0b:92 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-270078 Clientid:01:52:54:00:16:0b:92}
	I0719 04:56:33.339632  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:56:33.339746  163442 provision.go:143] copyHostCerts
	I0719 04:56:33.339788  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem
	I0719 04:56:33.339826  163442 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem, removing ...
	I0719 04:56:33.339845  163442 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem
	I0719 04:56:33.339926  163442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem (1082 bytes)
	I0719 04:56:33.340095  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem
	I0719 04:56:33.340133  163442 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem, removing ...
	I0719 04:56:33.340145  163442 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem
	I0719 04:56:33.340196  163442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem (1123 bytes)
	I0719 04:56:33.340268  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem
	I0719 04:56:33.340291  163442 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem, removing ...
	I0719 04:56:33.340298  163442 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem
	I0719 04:56:33.340335  163442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem (1679 bytes)
	I0719 04:56:33.340791  163442 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem org=jenkins.multinode-270078 san=[127.0.0.1 192.168.39.17 localhost minikube multinode-270078]
	I0719 04:56:33.522240  163442 provision.go:177] copyRemoteCerts
	I0719 04:56:33.522302  163442 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 04:56:33.522328  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHHostname
	I0719 04:56:33.524816  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:56:33.525185  163442 main.go:141] libmachine: (multinode-270078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:0b:92", ip: ""} in network mk-multinode-270078: {Iface:virbr1 ExpiryTime:2024-07-19 05:51:08 +0000 UTC Type:0 Mac:52:54:00:16:0b:92 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-270078 Clientid:01:52:54:00:16:0b:92}
	I0719 04:56:33.525219  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:56:33.525368  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHPort
	I0719 04:56:33.525593  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHKeyPath
	I0719 04:56:33.525767  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHUsername
	I0719 04:56:33.525911  163442 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/multinode-270078/id_rsa Username:docker}
	I0719 04:56:33.616244  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 04:56:33.616318  163442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 04:56:33.642902  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 04:56:33.642982  163442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0719 04:56:33.665786  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 04:56:33.665863  163442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 04:56:33.687969  163442 provision.go:87] duration metric: took 354.118609ms to configureAuth
	I0719 04:56:33.688000  163442 buildroot.go:189] setting minikube options for container-runtime
	I0719 04:56:33.688288  163442 config.go:182] Loaded profile config "multinode-270078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:56:33.688382  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHHostname
	I0719 04:56:33.691181  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:56:33.691571  163442 main.go:141] libmachine: (multinode-270078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:0b:92", ip: ""} in network mk-multinode-270078: {Iface:virbr1 ExpiryTime:2024-07-19 05:51:08 +0000 UTC Type:0 Mac:52:54:00:16:0b:92 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-270078 Clientid:01:52:54:00:16:0b:92}
	I0719 04:56:33.691601  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:56:33.691769  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHPort
	I0719 04:56:33.691949  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHKeyPath
	I0719 04:56:33.692087  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHKeyPath
	I0719 04:56:33.692244  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHUsername
	I0719 04:56:33.692384  163442 main.go:141] libmachine: Using SSH client type: native
	I0719 04:56:33.692552  163442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0719 04:56:33.692567  163442 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 04:58:04.390851  163442 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 04:58:04.390888  163442 machine.go:97] duration metric: took 1m31.44475532s to provisionDockerMachine
	I0719 04:58:04.390903  163442 start.go:293] postStartSetup for "multinode-270078" (driver="kvm2")
	I0719 04:58:04.390917  163442 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 04:58:04.390939  163442 main.go:141] libmachine: (multinode-270078) Calling .DriverName
	I0719 04:58:04.391386  163442 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 04:58:04.391426  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHHostname
	I0719 04:58:04.394570  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:58:04.395015  163442 main.go:141] libmachine: (multinode-270078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:0b:92", ip: ""} in network mk-multinode-270078: {Iface:virbr1 ExpiryTime:2024-07-19 05:51:08 +0000 UTC Type:0 Mac:52:54:00:16:0b:92 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-270078 Clientid:01:52:54:00:16:0b:92}
	I0719 04:58:04.395046  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:58:04.395233  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHPort
	I0719 04:58:04.395439  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHKeyPath
	I0719 04:58:04.395628  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHUsername
	I0719 04:58:04.395806  163442 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/multinode-270078/id_rsa Username:docker}
	I0719 04:58:04.483611  163442 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 04:58:04.487141  163442 command_runner.go:130] > NAME=Buildroot
	I0719 04:58:04.487162  163442 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0719 04:58:04.487167  163442 command_runner.go:130] > ID=buildroot
	I0719 04:58:04.487171  163442 command_runner.go:130] > VERSION_ID=2023.02.9
	I0719 04:58:04.487178  163442 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0719 04:58:04.487252  163442 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 04:58:04.487279  163442 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-122995/.minikube/addons for local assets ...
	I0719 04:58:04.487351  163442 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-122995/.minikube/files for local assets ...
	I0719 04:58:04.487424  163442 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem -> 1301702.pem in /etc/ssl/certs
	I0719 04:58:04.487435  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem -> /etc/ssl/certs/1301702.pem
	I0719 04:58:04.487512  163442 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 04:58:04.495690  163442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem --> /etc/ssl/certs/1301702.pem (1708 bytes)
	I0719 04:58:04.517305  163442 start.go:296] duration metric: took 126.384948ms for postStartSetup
	I0719 04:58:04.517356  163442 fix.go:56] duration metric: took 1m31.59526608s for fixHost
	I0719 04:58:04.517380  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHHostname
	I0719 04:58:04.520055  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:58:04.520384  163442 main.go:141] libmachine: (multinode-270078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:0b:92", ip: ""} in network mk-multinode-270078: {Iface:virbr1 ExpiryTime:2024-07-19 05:51:08 +0000 UTC Type:0 Mac:52:54:00:16:0b:92 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-270078 Clientid:01:52:54:00:16:0b:92}
	I0719 04:58:04.520413  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:58:04.520554  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHPort
	I0719 04:58:04.520761  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHKeyPath
	I0719 04:58:04.520926  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHKeyPath
	I0719 04:58:04.521037  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHUsername
	I0719 04:58:04.521199  163442 main.go:141] libmachine: Using SSH client type: native
	I0719 04:58:04.521390  163442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0719 04:58:04.521403  163442 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 04:58:04.633523  163442 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721365084.609147770
	
	I0719 04:58:04.633554  163442 fix.go:216] guest clock: 1721365084.609147770
	I0719 04:58:04.633564  163442 fix.go:229] Guest: 2024-07-19 04:58:04.60914777 +0000 UTC Remote: 2024-07-19 04:58:04.517360877 +0000 UTC m=+91.724510886 (delta=91.786893ms)
	I0719 04:58:04.633585  163442 fix.go:200] guest clock delta is within tolerance: 91.786893ms
	I0719 04:58:04.633590  163442 start.go:83] releasing machines lock for "multinode-270078", held for 1m31.711518954s
	I0719 04:58:04.633608  163442 main.go:141] libmachine: (multinode-270078) Calling .DriverName
	I0719 04:58:04.633859  163442 main.go:141] libmachine: (multinode-270078) Calling .GetIP
	I0719 04:58:04.636442  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:58:04.636712  163442 main.go:141] libmachine: (multinode-270078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:0b:92", ip: ""} in network mk-multinode-270078: {Iface:virbr1 ExpiryTime:2024-07-19 05:51:08 +0000 UTC Type:0 Mac:52:54:00:16:0b:92 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-270078 Clientid:01:52:54:00:16:0b:92}
	I0719 04:58:04.636737  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:58:04.636895  163442 main.go:141] libmachine: (multinode-270078) Calling .DriverName
	I0719 04:58:04.637469  163442 main.go:141] libmachine: (multinode-270078) Calling .DriverName
	I0719 04:58:04.637654  163442 main.go:141] libmachine: (multinode-270078) Calling .DriverName
	I0719 04:58:04.637743  163442 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 04:58:04.637800  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHHostname
	I0719 04:58:04.637844  163442 ssh_runner.go:195] Run: cat /version.json
	I0719 04:58:04.637868  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHHostname
	I0719 04:58:04.640478  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:58:04.640715  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:58:04.640811  163442 main.go:141] libmachine: (multinode-270078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:0b:92", ip: ""} in network mk-multinode-270078: {Iface:virbr1 ExpiryTime:2024-07-19 05:51:08 +0000 UTC Type:0 Mac:52:54:00:16:0b:92 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-270078 Clientid:01:52:54:00:16:0b:92}
	I0719 04:58:04.640848  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:58:04.640987  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHPort
	I0719 04:58:04.641148  163442 main.go:141] libmachine: (multinode-270078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:0b:92", ip: ""} in network mk-multinode-270078: {Iface:virbr1 ExpiryTime:2024-07-19 05:51:08 +0000 UTC Type:0 Mac:52:54:00:16:0b:92 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-270078 Clientid:01:52:54:00:16:0b:92}
	I0719 04:58:04.641171  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:58:04.641179  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHKeyPath
	I0719 04:58:04.641352  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHUsername
	I0719 04:58:04.641355  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHPort
	I0719 04:58:04.641530  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHKeyPath
	I0719 04:58:04.641519  163442 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/multinode-270078/id_rsa Username:docker}
	I0719 04:58:04.641651  163442 main.go:141] libmachine: (multinode-270078) Calling .GetSSHUsername
	I0719 04:58:04.641754  163442 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/multinode-270078/id_rsa Username:docker}
	I0719 04:58:04.721506  163442 command_runner.go:130] > {"iso_version": "v1.33.1-1721324531-19298", "kicbase_version": "v0.0.44-1721234491-19282", "minikube_version": "v1.33.1", "commit": "0e13329c5f674facda20b63833c6d01811d249dd"}
	I0719 04:58:04.722166  163442 ssh_runner.go:195] Run: systemctl --version
	I0719 04:58:04.758589  163442 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0719 04:58:04.758649  163442 command_runner.go:130] > systemd 252 (252)
	I0719 04:58:04.758678  163442 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0719 04:58:04.758745  163442 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 04:58:04.911513  163442 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0719 04:58:04.919340  163442 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0719 04:58:04.919537  163442 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 04:58:04.919625  163442 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 04:58:04.928555  163442 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0719 04:58:04.928576  163442 start.go:495] detecting cgroup driver to use...
	I0719 04:58:04.928635  163442 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 04:58:04.944454  163442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 04:58:04.958438  163442 docker.go:217] disabling cri-docker service (if available) ...
	I0719 04:58:04.958492  163442 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 04:58:04.971279  163442 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 04:58:04.984233  163442 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 04:58:05.127274  163442 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 04:58:05.262713  163442 docker.go:233] disabling docker service ...
	I0719 04:58:05.262797  163442 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 04:58:05.282091  163442 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 04:58:05.295744  163442 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 04:58:05.435396  163442 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 04:58:05.600680  163442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 04:58:05.615608  163442 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 04:58:05.635625  163442 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0719 04:58:05.636109  163442 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0719 04:58:05.636168  163442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:58:05.646290  163442 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 04:58:05.646342  163442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:58:05.656166  163442 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:58:05.665713  163442 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:58:05.675673  163442 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 04:58:05.685556  163442 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:58:05.695661  163442 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:58:05.707302  163442 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 04:58:05.717117  163442 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 04:58:05.726233  163442 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0719 04:58:05.726297  163442 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 04:58:05.735329  163442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:58:05.887627  163442 ssh_runner.go:195] Run: sudo systemctl restart crio
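
The sequence from 04:58:05.615 through 04:58:05.887 above wires the VM for CRI-O: it writes /etc/crictl.yaml pointing crictl at the CRI-O socket, rewrites the pause image and cgroup manager in /etc/crio/crio.conf.d/02-crio.conf, injects the net.ipv4.ip_unprivileged_port_start sysctl, enables IP forwarding, and then restarts crio. The Go sketch below reproduces just the two sed-style config rewrites with a regexp replace; it is an illustrative equivalent of those shell commands, not minikube's own crio.go helper.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioDropIn applies the same kind of edits the sed commands above
// perform on /etc/crio/crio.conf.d/02-crio.conf: force the pause image and
// the cgroup manager. Illustrative sketch only.
func rewriteCrioDropIn(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// Values taken from the log above.
	if err := rewriteCrioDropIn("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.9", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
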
	I0719 04:58:06.441429  163442 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 04:58:06.441506  163442 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 04:58:06.446113  163442 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0719 04:58:06.446138  163442 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0719 04:58:06.446144  163442 command_runner.go:130] > Device: 0,22	Inode: 1325        Links: 1
	I0719 04:58:06.446151  163442 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0719 04:58:06.446156  163442 command_runner.go:130] > Access: 2024-07-19 04:58:06.315470181 +0000
	I0719 04:58:06.446162  163442 command_runner.go:130] > Modify: 2024-07-19 04:58:06.315470181 +0000
	I0719 04:58:06.446169  163442 command_runner.go:130] > Change: 2024-07-19 04:58:06.315470181 +0000
	I0719 04:58:06.446174  163442 command_runner.go:130] >  Birth: -
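
"Will wait 60s for socket path /var/run/crio/crio.sock" above is a bounded poll: after the crio restart, minikube keeps checking (verified here with stat) until the socket exists or the timeout expires. A minimal Go sketch of that wait-for-socket pattern follows; the helper name and the 250ms poll interval are assumptions.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists as a unix socket or the timeout
// expires. Sketch of the "Will wait 60s for socket path" step above; the
// poll interval is an assumed value.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(250 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}
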
	I0719 04:58:06.446197  163442 start.go:563] Will wait 60s for crictl version
	I0719 04:58:06.446241  163442 ssh_runner.go:195] Run: which crictl
	I0719 04:58:06.449797  163442 command_runner.go:130] > /usr/bin/crictl
	I0719 04:58:06.449853  163442 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 04:58:06.483611  163442 command_runner.go:130] > Version:  0.1.0
	I0719 04:58:06.483639  163442 command_runner.go:130] > RuntimeName:  cri-o
	I0719 04:58:06.483647  163442 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0719 04:58:06.483655  163442 command_runner.go:130] > RuntimeApiVersion:  v1
	I0719 04:58:06.484615  163442 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 04:58:06.484698  163442 ssh_runner.go:195] Run: crio --version
	I0719 04:58:06.509895  163442 command_runner.go:130] > crio version 1.29.1
	I0719 04:58:06.509922  163442 command_runner.go:130] > Version:        1.29.1
	I0719 04:58:06.509928  163442 command_runner.go:130] > GitCommit:      unknown
	I0719 04:58:06.509933  163442 command_runner.go:130] > GitCommitDate:  unknown
	I0719 04:58:06.509936  163442 command_runner.go:130] > GitTreeState:   clean
	I0719 04:58:06.509946  163442 command_runner.go:130] > BuildDate:      2024-07-18T22:57:15Z
	I0719 04:58:06.509950  163442 command_runner.go:130] > GoVersion:      go1.21.6
	I0719 04:58:06.509954  163442 command_runner.go:130] > Compiler:       gc
	I0719 04:58:06.509958  163442 command_runner.go:130] > Platform:       linux/amd64
	I0719 04:58:06.509962  163442 command_runner.go:130] > Linkmode:       dynamic
	I0719 04:58:06.509966  163442 command_runner.go:130] > BuildTags:      
	I0719 04:58:06.509972  163442 command_runner.go:130] >   containers_image_ostree_stub
	I0719 04:58:06.509979  163442 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0719 04:58:06.509985  163442 command_runner.go:130] >   btrfs_noversion
	I0719 04:58:06.509994  163442 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0719 04:58:06.510002  163442 command_runner.go:130] >   libdm_no_deferred_remove
	I0719 04:58:06.510009  163442 command_runner.go:130] >   seccomp
	I0719 04:58:06.510015  163442 command_runner.go:130] > LDFlags:          unknown
	I0719 04:58:06.510019  163442 command_runner.go:130] > SeccompEnabled:   true
	I0719 04:58:06.510023  163442 command_runner.go:130] > AppArmorEnabled:  false
	I0719 04:58:06.511273  163442 ssh_runner.go:195] Run: crio --version
	I0719 04:58:06.539108  163442 command_runner.go:130] > crio version 1.29.1
	I0719 04:58:06.539132  163442 command_runner.go:130] > Version:        1.29.1
	I0719 04:58:06.539150  163442 command_runner.go:130] > GitCommit:      unknown
	I0719 04:58:06.539155  163442 command_runner.go:130] > GitCommitDate:  unknown
	I0719 04:58:06.539159  163442 command_runner.go:130] > GitTreeState:   clean
	I0719 04:58:06.539164  163442 command_runner.go:130] > BuildDate:      2024-07-18T22:57:15Z
	I0719 04:58:06.539171  163442 command_runner.go:130] > GoVersion:      go1.21.6
	I0719 04:58:06.539175  163442 command_runner.go:130] > Compiler:       gc
	I0719 04:58:06.539181  163442 command_runner.go:130] > Platform:       linux/amd64
	I0719 04:58:06.539185  163442 command_runner.go:130] > Linkmode:       dynamic
	I0719 04:58:06.539189  163442 command_runner.go:130] > BuildTags:      
	I0719 04:58:06.539193  163442 command_runner.go:130] >   containers_image_ostree_stub
	I0719 04:58:06.539197  163442 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0719 04:58:06.539202  163442 command_runner.go:130] >   btrfs_noversion
	I0719 04:58:06.539208  163442 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0719 04:58:06.539212  163442 command_runner.go:130] >   libdm_no_deferred_remove
	I0719 04:58:06.539220  163442 command_runner.go:130] >   seccomp
	I0719 04:58:06.539224  163442 command_runner.go:130] > LDFlags:          unknown
	I0719 04:58:06.539228  163442 command_runner.go:130] > SeccompEnabled:   true
	I0719 04:58:06.539234  163442 command_runner.go:130] > AppArmorEnabled:  false
	I0719 04:58:06.541253  163442 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0719 04:58:06.542602  163442 main.go:141] libmachine: (multinode-270078) Calling .GetIP
	I0719 04:58:06.545049  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:58:06.545387  163442 main.go:141] libmachine: (multinode-270078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:0b:92", ip: ""} in network mk-multinode-270078: {Iface:virbr1 ExpiryTime:2024-07-19 05:51:08 +0000 UTC Type:0 Mac:52:54:00:16:0b:92 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-270078 Clientid:01:52:54:00:16:0b:92}
	I0719 04:58:06.545413  163442 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:58:06.545570  163442 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 04:58:06.549395  163442 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0719 04:58:06.549647  163442 kubeadm.go:883] updating cluster {Name:multinode-270078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-270078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.199 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.126 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 04:58:06.549789  163442 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 04:58:06.549849  163442 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 04:58:06.589767  163442 command_runner.go:130] > {
	I0719 04:58:06.589795  163442 command_runner.go:130] >   "images": [
	I0719 04:58:06.589801  163442 command_runner.go:130] >     {
	I0719 04:58:06.589813  163442 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0719 04:58:06.589820  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.589828  163442 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0719 04:58:06.589834  163442 command_runner.go:130] >       ],
	I0719 04:58:06.589840  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.589853  163442 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0719 04:58:06.589863  163442 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0719 04:58:06.589873  163442 command_runner.go:130] >       ],
	I0719 04:58:06.589881  163442 command_runner.go:130] >       "size": "87165492",
	I0719 04:58:06.589890  163442 command_runner.go:130] >       "uid": null,
	I0719 04:58:06.589896  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.589912  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.589917  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.589921  163442 command_runner.go:130] >     },
	I0719 04:58:06.589924  163442 command_runner.go:130] >     {
	I0719 04:58:06.589930  163442 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0719 04:58:06.589935  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.589941  163442 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0719 04:58:06.589946  163442 command_runner.go:130] >       ],
	I0719 04:58:06.589951  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.589958  163442 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0719 04:58:06.589967  163442 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0719 04:58:06.589973  163442 command_runner.go:130] >       ],
	I0719 04:58:06.589981  163442 command_runner.go:130] >       "size": "1363676",
	I0719 04:58:06.589988  163442 command_runner.go:130] >       "uid": null,
	I0719 04:58:06.590001  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.590007  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.590014  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.590017  163442 command_runner.go:130] >     },
	I0719 04:58:06.590021  163442 command_runner.go:130] >     {
	I0719 04:58:06.590027  163442 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0719 04:58:06.590032  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.590036  163442 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0719 04:58:06.590040  163442 command_runner.go:130] >       ],
	I0719 04:58:06.590045  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.590052  163442 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0719 04:58:06.590065  163442 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0719 04:58:06.590072  163442 command_runner.go:130] >       ],
	I0719 04:58:06.590079  163442 command_runner.go:130] >       "size": "31470524",
	I0719 04:58:06.590085  163442 command_runner.go:130] >       "uid": null,
	I0719 04:58:06.590092  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.590098  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.590108  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.590114  163442 command_runner.go:130] >     },
	I0719 04:58:06.590122  163442 command_runner.go:130] >     {
	I0719 04:58:06.590132  163442 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0719 04:58:06.590140  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.590145  163442 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0719 04:58:06.590151  163442 command_runner.go:130] >       ],
	I0719 04:58:06.590155  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.590168  163442 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0719 04:58:06.590188  163442 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0719 04:58:06.590197  163442 command_runner.go:130] >       ],
	I0719 04:58:06.590206  163442 command_runner.go:130] >       "size": "61245718",
	I0719 04:58:06.590217  163442 command_runner.go:130] >       "uid": null,
	I0719 04:58:06.590227  163442 command_runner.go:130] >       "username": "nonroot",
	I0719 04:58:06.590236  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.590244  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.590250  163442 command_runner.go:130] >     },
	I0719 04:58:06.590255  163442 command_runner.go:130] >     {
	I0719 04:58:06.590264  163442 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0719 04:58:06.590275  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.590282  163442 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0719 04:58:06.590291  163442 command_runner.go:130] >       ],
	I0719 04:58:06.590300  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.590314  163442 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0719 04:58:06.590327  163442 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0719 04:58:06.590341  163442 command_runner.go:130] >       ],
	I0719 04:58:06.590348  163442 command_runner.go:130] >       "size": "150779692",
	I0719 04:58:06.590354  163442 command_runner.go:130] >       "uid": {
	I0719 04:58:06.590363  163442 command_runner.go:130] >         "value": "0"
	I0719 04:58:06.590372  163442 command_runner.go:130] >       },
	I0719 04:58:06.590379  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.590389  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.590399  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.590407  163442 command_runner.go:130] >     },
	I0719 04:58:06.590415  163442 command_runner.go:130] >     {
	I0719 04:58:06.590426  163442 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0719 04:58:06.590436  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.590444  163442 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0719 04:58:06.590447  163442 command_runner.go:130] >       ],
	I0719 04:58:06.590456  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.590471  163442 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0719 04:58:06.590486  163442 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0719 04:58:06.590494  163442 command_runner.go:130] >       ],
	I0719 04:58:06.590505  163442 command_runner.go:130] >       "size": "117609954",
	I0719 04:58:06.590514  163442 command_runner.go:130] >       "uid": {
	I0719 04:58:06.590523  163442 command_runner.go:130] >         "value": "0"
	I0719 04:58:06.590531  163442 command_runner.go:130] >       },
	I0719 04:58:06.590538  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.590543  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.590551  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.590561  163442 command_runner.go:130] >     },
	I0719 04:58:06.590566  163442 command_runner.go:130] >     {
	I0719 04:58:06.590579  163442 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0719 04:58:06.590589  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.590601  163442 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0719 04:58:06.590609  163442 command_runner.go:130] >       ],
	I0719 04:58:06.590618  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.590631  163442 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0719 04:58:06.590644  163442 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0719 04:58:06.590652  163442 command_runner.go:130] >       ],
	I0719 04:58:06.590660  163442 command_runner.go:130] >       "size": "112198984",
	I0719 04:58:06.590669  163442 command_runner.go:130] >       "uid": {
	I0719 04:58:06.590679  163442 command_runner.go:130] >         "value": "0"
	I0719 04:58:06.590687  163442 command_runner.go:130] >       },
	I0719 04:58:06.590694  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.590703  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.590712  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.590718  163442 command_runner.go:130] >     },
	I0719 04:58:06.590722  163442 command_runner.go:130] >     {
	I0719 04:58:06.590734  163442 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0719 04:58:06.590744  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.590752  163442 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0719 04:58:06.590760  163442 command_runner.go:130] >       ],
	I0719 04:58:06.590767  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.590791  163442 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0719 04:58:06.590806  163442 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0719 04:58:06.590811  163442 command_runner.go:130] >       ],
	I0719 04:58:06.590815  163442 command_runner.go:130] >       "size": "85953945",
	I0719 04:58:06.590818  163442 command_runner.go:130] >       "uid": null,
	I0719 04:58:06.590825  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.590831  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.590837  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.590842  163442 command_runner.go:130] >     },
	I0719 04:58:06.590848  163442 command_runner.go:130] >     {
	I0719 04:58:06.590859  163442 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0719 04:58:06.590864  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.590872  163442 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0719 04:58:06.590878  163442 command_runner.go:130] >       ],
	I0719 04:58:06.590884  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.590896  163442 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0719 04:58:06.590903  163442 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0719 04:58:06.590908  163442 command_runner.go:130] >       ],
	I0719 04:58:06.590914  163442 command_runner.go:130] >       "size": "63051080",
	I0719 04:58:06.590924  163442 command_runner.go:130] >       "uid": {
	I0719 04:58:06.590930  163442 command_runner.go:130] >         "value": "0"
	I0719 04:58:06.590939  163442 command_runner.go:130] >       },
	I0719 04:58:06.590947  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.590955  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.590964  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.590972  163442 command_runner.go:130] >     },
	I0719 04:58:06.590978  163442 command_runner.go:130] >     {
	I0719 04:58:06.590991  163442 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0719 04:58:06.590997  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.591002  163442 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0719 04:58:06.591011  163442 command_runner.go:130] >       ],
	I0719 04:58:06.591017  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.591031  163442 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0719 04:58:06.591046  163442 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0719 04:58:06.591055  163442 command_runner.go:130] >       ],
	I0719 04:58:06.591062  163442 command_runner.go:130] >       "size": "750414",
	I0719 04:58:06.591070  163442 command_runner.go:130] >       "uid": {
	I0719 04:58:06.591077  163442 command_runner.go:130] >         "value": "65535"
	I0719 04:58:06.591084  163442 command_runner.go:130] >       },
	I0719 04:58:06.591089  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.591097  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.591107  163442 command_runner.go:130] >       "pinned": true
	I0719 04:58:06.591112  163442 command_runner.go:130] >     }
	I0719 04:58:06.591121  163442 command_runner.go:130] >   ]
	I0719 04:58:06.591129  163442 command_runner.go:130] > }
	I0719 04:58:06.591347  163442 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 04:58:06.591360  163442 crio.go:433] Images already preloaded, skipping extraction
	I0719 04:58:06.591419  163442 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 04:58:06.622569  163442 command_runner.go:130] > {
	I0719 04:58:06.622597  163442 command_runner.go:130] >   "images": [
	I0719 04:58:06.622603  163442 command_runner.go:130] >     {
	I0719 04:58:06.622614  163442 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0719 04:58:06.622620  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.622628  163442 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0719 04:58:06.622633  163442 command_runner.go:130] >       ],
	I0719 04:58:06.622638  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.622650  163442 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0719 04:58:06.622660  163442 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0719 04:58:06.622666  163442 command_runner.go:130] >       ],
	I0719 04:58:06.622677  163442 command_runner.go:130] >       "size": "87165492",
	I0719 04:58:06.622687  163442 command_runner.go:130] >       "uid": null,
	I0719 04:58:06.622696  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.622709  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.622716  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.622722  163442 command_runner.go:130] >     },
	I0719 04:58:06.622729  163442 command_runner.go:130] >     {
	I0719 04:58:06.622741  163442 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0719 04:58:06.622751  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.622765  163442 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0719 04:58:06.622773  163442 command_runner.go:130] >       ],
	I0719 04:58:06.622780  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.622796  163442 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0719 04:58:06.622810  163442 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0719 04:58:06.622819  163442 command_runner.go:130] >       ],
	I0719 04:58:06.622827  163442 command_runner.go:130] >       "size": "1363676",
	I0719 04:58:06.622836  163442 command_runner.go:130] >       "uid": null,
	I0719 04:58:06.622846  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.622855  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.622864  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.622872  163442 command_runner.go:130] >     },
	I0719 04:58:06.622878  163442 command_runner.go:130] >     {
	I0719 04:58:06.622893  163442 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0719 04:58:06.622903  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.622913  163442 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0719 04:58:06.622921  163442 command_runner.go:130] >       ],
	I0719 04:58:06.622928  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.622944  163442 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0719 04:58:06.622960  163442 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0719 04:58:06.622969  163442 command_runner.go:130] >       ],
	I0719 04:58:06.622977  163442 command_runner.go:130] >       "size": "31470524",
	I0719 04:58:06.622986  163442 command_runner.go:130] >       "uid": null,
	I0719 04:58:06.622995  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.623005  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.623015  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.623022  163442 command_runner.go:130] >     },
	I0719 04:58:06.623029  163442 command_runner.go:130] >     {
	I0719 04:58:06.623041  163442 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0719 04:58:06.623049  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.623059  163442 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0719 04:58:06.623068  163442 command_runner.go:130] >       ],
	I0719 04:58:06.623075  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.623088  163442 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0719 04:58:06.623105  163442 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0719 04:58:06.623113  163442 command_runner.go:130] >       ],
	I0719 04:58:06.623119  163442 command_runner.go:130] >       "size": "61245718",
	I0719 04:58:06.623125  163442 command_runner.go:130] >       "uid": null,
	I0719 04:58:06.623133  163442 command_runner.go:130] >       "username": "nonroot",
	I0719 04:58:06.623143  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.623151  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.623159  163442 command_runner.go:130] >     },
	I0719 04:58:06.623165  163442 command_runner.go:130] >     {
	I0719 04:58:06.623178  163442 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0719 04:58:06.623188  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.623197  163442 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0719 04:58:06.623205  163442 command_runner.go:130] >       ],
	I0719 04:58:06.623213  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.623227  163442 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0719 04:58:06.623244  163442 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0719 04:58:06.623253  163442 command_runner.go:130] >       ],
	I0719 04:58:06.623261  163442 command_runner.go:130] >       "size": "150779692",
	I0719 04:58:06.623270  163442 command_runner.go:130] >       "uid": {
	I0719 04:58:06.623279  163442 command_runner.go:130] >         "value": "0"
	I0719 04:58:06.623287  163442 command_runner.go:130] >       },
	I0719 04:58:06.623295  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.623304  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.623314  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.623322  163442 command_runner.go:130] >     },
	I0719 04:58:06.623338  163442 command_runner.go:130] >     {
	I0719 04:58:06.623350  163442 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0719 04:58:06.623359  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.623370  163442 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0719 04:58:06.623378  163442 command_runner.go:130] >       ],
	I0719 04:58:06.623385  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.623400  163442 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0719 04:58:06.623417  163442 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0719 04:58:06.623426  163442 command_runner.go:130] >       ],
	I0719 04:58:06.623432  163442 command_runner.go:130] >       "size": "117609954",
	I0719 04:58:06.623439  163442 command_runner.go:130] >       "uid": {
	I0719 04:58:06.623448  163442 command_runner.go:130] >         "value": "0"
	I0719 04:58:06.623454  163442 command_runner.go:130] >       },
	I0719 04:58:06.623464  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.623473  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.623481  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.623489  163442 command_runner.go:130] >     },
	I0719 04:58:06.623495  163442 command_runner.go:130] >     {
	I0719 04:58:06.623508  163442 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0719 04:58:06.623518  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.623529  163442 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0719 04:58:06.623536  163442 command_runner.go:130] >       ],
	I0719 04:58:06.623544  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.623560  163442 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0719 04:58:06.623574  163442 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0719 04:58:06.623581  163442 command_runner.go:130] >       ],
	I0719 04:58:06.623592  163442 command_runner.go:130] >       "size": "112198984",
	I0719 04:58:06.623601  163442 command_runner.go:130] >       "uid": {
	I0719 04:58:06.623608  163442 command_runner.go:130] >         "value": "0"
	I0719 04:58:06.623616  163442 command_runner.go:130] >       },
	I0719 04:58:06.623624  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.623633  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.623642  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.623650  163442 command_runner.go:130] >     },
	I0719 04:58:06.623656  163442 command_runner.go:130] >     {
	I0719 04:58:06.623669  163442 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0719 04:58:06.623679  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.623691  163442 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0719 04:58:06.623698  163442 command_runner.go:130] >       ],
	I0719 04:58:06.623706  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.623729  163442 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0719 04:58:06.623744  163442 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0719 04:58:06.623752  163442 command_runner.go:130] >       ],
	I0719 04:58:06.623759  163442 command_runner.go:130] >       "size": "85953945",
	I0719 04:58:06.623768  163442 command_runner.go:130] >       "uid": null,
	I0719 04:58:06.623777  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.623784  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.623791  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.623797  163442 command_runner.go:130] >     },
	I0719 04:58:06.623805  163442 command_runner.go:130] >     {
	I0719 04:58:06.623816  163442 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0719 04:58:06.623826  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.623836  163442 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0719 04:58:06.623845  163442 command_runner.go:130] >       ],
	I0719 04:58:06.623852  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.623868  163442 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0719 04:58:06.623883  163442 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0719 04:58:06.623891  163442 command_runner.go:130] >       ],
	I0719 04:58:06.623897  163442 command_runner.go:130] >       "size": "63051080",
	I0719 04:58:06.623906  163442 command_runner.go:130] >       "uid": {
	I0719 04:58:06.623913  163442 command_runner.go:130] >         "value": "0"
	I0719 04:58:06.623921  163442 command_runner.go:130] >       },
	I0719 04:58:06.623930  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.623940  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.623950  163442 command_runner.go:130] >       "pinned": false
	I0719 04:58:06.623957  163442 command_runner.go:130] >     },
	I0719 04:58:06.623964  163442 command_runner.go:130] >     {
	I0719 04:58:06.623977  163442 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0719 04:58:06.623985  163442 command_runner.go:130] >       "repoTags": [
	I0719 04:58:06.623993  163442 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0719 04:58:06.624001  163442 command_runner.go:130] >       ],
	I0719 04:58:06.624009  163442 command_runner.go:130] >       "repoDigests": [
	I0719 04:58:06.624024  163442 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0719 04:58:06.624040  163442 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0719 04:58:06.624049  163442 command_runner.go:130] >       ],
	I0719 04:58:06.624057  163442 command_runner.go:130] >       "size": "750414",
	I0719 04:58:06.624066  163442 command_runner.go:130] >       "uid": {
	I0719 04:58:06.624074  163442 command_runner.go:130] >         "value": "65535"
	I0719 04:58:06.624081  163442 command_runner.go:130] >       },
	I0719 04:58:06.624088  163442 command_runner.go:130] >       "username": "",
	I0719 04:58:06.624097  163442 command_runner.go:130] >       "spec": null,
	I0719 04:58:06.624105  163442 command_runner.go:130] >       "pinned": true
	I0719 04:58:06.624112  163442 command_runner.go:130] >     }
	I0719 04:58:06.624118  163442 command_runner.go:130] >   ]
	I0719 04:58:06.624125  163442 command_runner.go:130] > }
	I0719 04:58:06.624245  163442 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 04:58:06.624259  163442 cache_images.go:84] Images are preloaded, skipping loading
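
The two "crictl images --output json" dumps above are what the crio.go:514 and cache_images.go:84 decisions rely on: every image required for Kubernetes v1.30.3 already has a matching repoTag, so neither the preload tarball nor individual image loads are needed. Below is a small Go sketch of that check against the same JSON shape; the struct definition and the required-tag list are assumptions derived from the repoTags visible in the output, not minikube's own manifest.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages mirrors the JSON shape printed by "crictl images --output json"
// in the log above (only the fields this check needs).
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// allPreloaded reports whether every required repo tag is already present.
// Sketch only; the required list in main is taken from the tags shown above.
func allPreloaded(raw []byte, required []string) (bool, error) {
	var imgs crictlImages
	if err := json.Unmarshal(raw, &imgs); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range required {
		if !have[want] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	raw, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.30.3",
		"registry.k8s.io/kube-controller-manager:v1.30.3",
		"registry.k8s.io/kube-scheduler:v1.30.3",
		"registry.k8s.io/kube-proxy:v1.30.3",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/pause:3.9",
	}
	ok, err := allPreloaded(raw, required)
	if err != nil {
		panic(err)
	}
	fmt.Println("all images preloaded:", ok)
}
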
	I0719 04:58:06.624269  163442 kubeadm.go:934] updating node { 192.168.39.17 8443 v1.30.3 crio true true} ...
	I0719 04:58:06.624398  163442 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-270078 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-270078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
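
The kubelet [Unit]/[Service] snippet above (kubeadm.go:946) is rendered from the config block that follows it: the kubelet binary path embeds KubernetesVersion, while --hostname-override and --node-ip come from the primary node entry. The short Go sketch below rebuilds that ExecStart line from those three values; the function name is an assumption and minikube's real template carries additional flags when extra options are set.

package main

import "fmt"

// kubeletExecStart renders the ExecStart line seen in the kubelet drop-in
// above from the node's Kubernetes version, name and IP. Illustrative sketch.
func kubeletExecStart(k8sVersion, nodeName, nodeIP string) string {
	return fmt.Sprintf("ExecStart=/var/lib/minikube/binaries/%s/kubelet"+
		" --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf"+
		" --config=/var/lib/kubelet/config.yaml"+
		" --hostname-override=%s"+
		" --kubeconfig=/etc/kubernetes/kubelet.conf"+
		" --node-ip=%s",
		k8sVersion, nodeName, nodeIP)
}

func main() {
	// Values taken from the log above.
	fmt.Println(kubeletExecStart("v1.30.3", "multinode-270078", "192.168.39.17"))
}
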
	I0719 04:58:06.624486  163442 ssh_runner.go:195] Run: crio config
	I0719 04:58:06.662881  163442 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0719 04:58:06.662909  163442 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0719 04:58:06.662915  163442 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0719 04:58:06.662919  163442 command_runner.go:130] > #
	I0719 04:58:06.662927  163442 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0719 04:58:06.662936  163442 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0719 04:58:06.662946  163442 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0719 04:58:06.662956  163442 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0719 04:58:06.662961  163442 command_runner.go:130] > # reload'.
	I0719 04:58:06.662970  163442 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0719 04:58:06.662978  163442 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0719 04:58:06.662986  163442 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0719 04:58:06.663002  163442 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0719 04:58:06.663010  163442 command_runner.go:130] > [crio]
	I0719 04:58:06.663020  163442 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0719 04:58:06.663050  163442 command_runner.go:130] > # containers images, in this directory.
	I0719 04:58:06.663064  163442 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0719 04:58:06.663079  163442 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0719 04:58:06.663145  163442 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0719 04:58:06.663166  163442 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0719 04:58:06.663298  163442 command_runner.go:130] > # imagestore = ""
	I0719 04:58:06.663319  163442 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0719 04:58:06.663329  163442 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0719 04:58:06.663414  163442 command_runner.go:130] > storage_driver = "overlay"
	I0719 04:58:06.663428  163442 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0719 04:58:06.663439  163442 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0719 04:58:06.663448  163442 command_runner.go:130] > storage_option = [
	I0719 04:58:06.663585  163442 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0719 04:58:06.663597  163442 command_runner.go:130] > ]
	I0719 04:58:06.663609  163442 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0719 04:58:06.663624  163442 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0719 04:58:06.663907  163442 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0719 04:58:06.663924  163442 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0719 04:58:06.663930  163442 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0719 04:58:06.663935  163442 command_runner.go:130] > # always happen on a node reboot
	I0719 04:58:06.664131  163442 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0719 04:58:06.664151  163442 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0719 04:58:06.664160  163442 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0719 04:58:06.664171  163442 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0719 04:58:06.664277  163442 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0719 04:58:06.664299  163442 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0719 04:58:06.664312  163442 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0719 04:58:06.664510  163442 command_runner.go:130] > # internal_wipe = true
	I0719 04:58:06.664530  163442 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0719 04:58:06.664538  163442 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0719 04:58:06.664742  163442 command_runner.go:130] > # internal_repair = false
	I0719 04:58:06.664753  163442 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0719 04:58:06.664759  163442 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0719 04:58:06.664764  163442 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0719 04:58:06.665002  163442 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0719 04:58:06.665012  163442 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0719 04:58:06.665016  163442 command_runner.go:130] > [crio.api]
	I0719 04:58:06.665021  163442 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0719 04:58:06.665303  163442 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0719 04:58:06.665323  163442 command_runner.go:130] > # IP address on which the stream server will listen.
	I0719 04:58:06.665397  163442 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0719 04:58:06.665420  163442 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0719 04:58:06.665430  163442 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0719 04:58:06.665641  163442 command_runner.go:130] > # stream_port = "0"
	I0719 04:58:06.665658  163442 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0719 04:58:06.665932  163442 command_runner.go:130] > # stream_enable_tls = false
	I0719 04:58:06.665949  163442 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0719 04:58:06.666348  163442 command_runner.go:130] > # stream_idle_timeout = ""
	I0719 04:58:06.666368  163442 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0719 04:58:06.666382  163442 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0719 04:58:06.666391  163442 command_runner.go:130] > # minutes.
	I0719 04:58:06.666398  163442 command_runner.go:130] > # stream_tls_cert = ""
	I0719 04:58:06.666409  163442 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0719 04:58:06.666419  163442 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0719 04:58:06.666467  163442 command_runner.go:130] > # stream_tls_key = ""
	I0719 04:58:06.666488  163442 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0719 04:58:06.666499  163442 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0719 04:58:06.666520  163442 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0719 04:58:06.666530  163442 command_runner.go:130] > # stream_tls_ca = ""
	I0719 04:58:06.666538  163442 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0719 04:58:06.666546  163442 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0719 04:58:06.666553  163442 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0719 04:58:06.666560  163442 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0719 04:58:06.666566  163442 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0719 04:58:06.666575  163442 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0719 04:58:06.666584  163442 command_runner.go:130] > [crio.runtime]
	I0719 04:58:06.666596  163442 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0719 04:58:06.666606  163442 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0719 04:58:06.666616  163442 command_runner.go:130] > # "nofile=1024:2048"
	I0719 04:58:06.666625  163442 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0719 04:58:06.666636  163442 command_runner.go:130] > # default_ulimits = [
	I0719 04:58:06.666644  163442 command_runner.go:130] > # ]
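	(For reference, an uncommented default_ulimits entry takes the form of a TOML string array; the nofile values below are illustrative and not taken from this run.)
	default_ulimits = [
		"nofile=1024:2048",
	]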
	I0719 04:58:06.666653  163442 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0719 04:58:06.666905  163442 command_runner.go:130] > # no_pivot = false
	I0719 04:58:06.666915  163442 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0719 04:58:06.666921  163442 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0719 04:58:06.667135  163442 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0719 04:58:06.667149  163442 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0719 04:58:06.667154  163442 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0719 04:58:06.667163  163442 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0719 04:58:06.667273  163442 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0719 04:58:06.667289  163442 command_runner.go:130] > # Cgroup setting for conmon
	I0719 04:58:06.667300  163442 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0719 04:58:06.667426  163442 command_runner.go:130] > conmon_cgroup = "pod"
	I0719 04:58:06.667445  163442 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0719 04:58:06.667454  163442 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0719 04:58:06.667467  163442 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0719 04:58:06.667476  163442 command_runner.go:130] > conmon_env = [
	I0719 04:58:06.667520  163442 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0719 04:58:06.667547  163442 command_runner.go:130] > ]
	I0719 04:58:06.667561  163442 command_runner.go:130] > # Additional environment variables to set for all the
	I0719 04:58:06.667569  163442 command_runner.go:130] > # containers. These are overridden if set in the
	I0719 04:58:06.667582  163442 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0719 04:58:06.667685  163442 command_runner.go:130] > # default_env = [
	I0719 04:58:06.667869  163442 command_runner.go:130] > # ]
	I0719 04:58:06.667889  163442 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0719 04:58:06.667902  163442 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0719 04:58:06.668062  163442 command_runner.go:130] > # selinux = false
	I0719 04:58:06.668081  163442 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0719 04:58:06.668090  163442 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0719 04:58:06.668099  163442 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0719 04:58:06.668192  163442 command_runner.go:130] > # seccomp_profile = ""
	I0719 04:58:06.668205  163442 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0719 04:58:06.668238  163442 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0719 04:58:06.668255  163442 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0719 04:58:06.668263  163442 command_runner.go:130] > # which might increase security.
	I0719 04:58:06.668271  163442 command_runner.go:130] > # This option is currently deprecated,
	I0719 04:58:06.668280  163442 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0719 04:58:06.668328  163442 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0719 04:58:06.668345  163442 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0719 04:58:06.668355  163442 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0719 04:58:06.668369  163442 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0719 04:58:06.668383  163442 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0719 04:58:06.668392  163442 command_runner.go:130] > # This option supports live configuration reload.
	I0719 04:58:06.668536  163442 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0719 04:58:06.668546  163442 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0719 04:58:06.668550  163442 command_runner.go:130] > # the cgroup blockio controller.
	I0719 04:58:06.668684  163442 command_runner.go:130] > # blockio_config_file = ""
	I0719 04:58:06.668700  163442 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0719 04:58:06.668707  163442 command_runner.go:130] > # blockio parameters.
	I0719 04:58:06.668955  163442 command_runner.go:130] > # blockio_reload = false
	I0719 04:58:06.668974  163442 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0719 04:58:06.668980  163442 command_runner.go:130] > # irqbalance daemon.
	I0719 04:58:06.669201  163442 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0719 04:58:06.669217  163442 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0719 04:58:06.669227  163442 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0719 04:58:06.669238  163442 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0719 04:58:06.669427  163442 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0719 04:58:06.669447  163442 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0719 04:58:06.669456  163442 command_runner.go:130] > # This option supports live configuration reload.
	I0719 04:58:06.669654  163442 command_runner.go:130] > # rdt_config_file = ""
	I0719 04:58:06.669673  163442 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0719 04:58:06.669747  163442 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0719 04:58:06.669787  163442 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0719 04:58:06.669981  163442 command_runner.go:130] > # separate_pull_cgroup = ""
	I0719 04:58:06.669997  163442 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0719 04:58:06.670009  163442 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0719 04:58:06.670016  163442 command_runner.go:130] > # will be added.
	I0719 04:58:06.670100  163442 command_runner.go:130] > # default_capabilities = [
	I0719 04:58:06.670241  163442 command_runner.go:130] > # 	"CHOWN",
	I0719 04:58:06.670363  163442 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0719 04:58:06.670475  163442 command_runner.go:130] > # 	"FSETID",
	I0719 04:58:06.670594  163442 command_runner.go:130] > # 	"FOWNER",
	I0719 04:58:06.670715  163442 command_runner.go:130] > # 	"SETGID",
	I0719 04:58:06.670882  163442 command_runner.go:130] > # 	"SETUID",
	I0719 04:58:06.670990  163442 command_runner.go:130] > # 	"SETPCAP",
	I0719 04:58:06.671106  163442 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0719 04:58:06.671223  163442 command_runner.go:130] > # 	"KILL",
	I0719 04:58:06.671332  163442 command_runner.go:130] > # ]
	I0719 04:58:06.671348  163442 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0719 04:58:06.671364  163442 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0719 04:58:06.671553  163442 command_runner.go:130] > # add_inheritable_capabilities = false
	I0719 04:58:06.671567  163442 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0719 04:58:06.671576  163442 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0719 04:58:06.671583  163442 command_runner.go:130] > default_sysctls = [
	I0719 04:58:06.671633  163442 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0719 04:58:06.671670  163442 command_runner.go:130] > ]
	I0719 04:58:06.671681  163442 command_runner.go:130] > # List of devices on the host that a
	I0719 04:58:06.671691  163442 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0719 04:58:06.671797  163442 command_runner.go:130] > # allowed_devices = [
	I0719 04:58:06.671969  163442 command_runner.go:130] > # 	"/dev/fuse",
	I0719 04:58:06.671980  163442 command_runner.go:130] > # ]
	I0719 04:58:06.671989  163442 command_runner.go:130] > # List of additional devices, specified as
	I0719 04:58:06.671999  163442 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0719 04:58:06.672010  163442 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0719 04:58:06.672021  163442 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0719 04:58:06.672030  163442 command_runner.go:130] > # additional_devices = [
	I0719 04:58:06.672037  163442 command_runner.go:130] > # ]
	I0719 04:58:06.672049  163442 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0719 04:58:06.672057  163442 command_runner.go:130] > # cdi_spec_dirs = [
	I0719 04:58:06.672067  163442 command_runner.go:130] > # 	"/etc/cdi",
	I0719 04:58:06.672073  163442 command_runner.go:130] > # 	"/var/run/cdi",
	I0719 04:58:06.672082  163442 command_runner.go:130] > # ]
	I0719 04:58:06.672092  163442 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0719 04:58:06.672101  163442 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0719 04:58:06.672106  163442 command_runner.go:130] > # Defaults to false.
	I0719 04:58:06.672117  163442 command_runner.go:130] > # device_ownership_from_security_context = false
	I0719 04:58:06.672129  163442 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0719 04:58:06.672141  163442 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0719 04:58:06.672151  163442 command_runner.go:130] > # hooks_dir = [
	I0719 04:58:06.672161  163442 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0719 04:58:06.672168  163442 command_runner.go:130] > # ]
	I0719 04:58:06.672177  163442 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0719 04:58:06.672187  163442 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0719 04:58:06.672193  163442 command_runner.go:130] > # its default mounts from the following two files:
	I0719 04:58:06.672200  163442 command_runner.go:130] > #
	I0719 04:58:06.672210  163442 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0719 04:58:06.672224  163442 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0719 04:58:06.672232  163442 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0719 04:58:06.672240  163442 command_runner.go:130] > #
	I0719 04:58:06.672249  163442 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0719 04:58:06.672263  163442 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0719 04:58:06.672276  163442 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0719 04:58:06.672287  163442 command_runner.go:130] > #      only add mounts it finds in this file.
	I0719 04:58:06.672295  163442 command_runner.go:130] > #
	I0719 04:58:06.672303  163442 command_runner.go:130] > # default_mounts_file = ""
	I0719 04:58:06.672314  163442 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0719 04:58:06.672329  163442 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0719 04:58:06.672342  163442 command_runner.go:130] > pids_limit = 1024
	I0719 04:58:06.672355  163442 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0719 04:58:06.672368  163442 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0719 04:58:06.672381  163442 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0719 04:58:06.672392  163442 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0719 04:58:06.672404  163442 command_runner.go:130] > # log_size_max = -1
	I0719 04:58:06.672416  163442 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0719 04:58:06.672529  163442 command_runner.go:130] > # log_to_journald = false
	I0719 04:58:06.672549  163442 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0719 04:58:06.672560  163442 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0719 04:58:06.672570  163442 command_runner.go:130] > # Path to directory for container attach sockets.
	I0719 04:58:06.672580  163442 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0719 04:58:06.672589  163442 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0719 04:58:06.672596  163442 command_runner.go:130] > # bind_mount_prefix = ""
	I0719 04:58:06.672606  163442 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0719 04:58:06.672616  163442 command_runner.go:130] > # read_only = false
	I0719 04:58:06.672627  163442 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0719 04:58:06.672642  163442 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0719 04:58:06.672652  163442 command_runner.go:130] > # live configuration reload.
	I0719 04:58:06.672659  163442 command_runner.go:130] > # log_level = "info"
	I0719 04:58:06.672672  163442 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0719 04:58:06.672683  163442 command_runner.go:130] > # This option supports live configuration reload.
	I0719 04:58:06.672693  163442 command_runner.go:130] > # log_filter = ""
	I0719 04:58:06.672703  163442 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0719 04:58:06.672717  163442 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0719 04:58:06.672723  163442 command_runner.go:130] > # separated by comma.
	I0719 04:58:06.672733  163442 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0719 04:58:06.672742  163442 command_runner.go:130] > # uid_mappings = ""
	I0719 04:58:06.672752  163442 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0719 04:58:06.672821  163442 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0719 04:58:06.672839  163442 command_runner.go:130] > # separated by comma.
	I0719 04:58:06.672852  163442 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0719 04:58:06.672866  163442 command_runner.go:130] > # gid_mappings = ""
	I0719 04:58:06.672903  163442 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0719 04:58:06.672935  163442 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0719 04:58:06.672947  163442 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0719 04:58:06.672963  163442 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0719 04:58:06.672972  163442 command_runner.go:130] > # minimum_mappable_uid = -1
	I0719 04:58:06.672983  163442 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0719 04:58:06.672995  163442 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0719 04:58:06.673008  163442 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0719 04:58:06.673032  163442 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0719 04:58:06.673045  163442 command_runner.go:130] > # minimum_mappable_gid = -1
	I0719 04:58:06.673057  163442 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0719 04:58:06.673084  163442 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0719 04:58:06.673096  163442 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0719 04:58:06.673106  163442 command_runner.go:130] > # ctr_stop_timeout = 30
	I0719 04:58:06.673115  163442 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0719 04:58:06.673128  163442 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0719 04:58:06.673138  163442 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0719 04:58:06.673148  163442 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0719 04:58:06.673155  163442 command_runner.go:130] > drop_infra_ctr = false
	I0719 04:58:06.673168  163442 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0719 04:58:06.673183  163442 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0719 04:58:06.673197  163442 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0719 04:58:06.673210  163442 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0719 04:58:06.673224  163442 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0719 04:58:06.673236  163442 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0719 04:58:06.673249  163442 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0719 04:58:06.673260  163442 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0719 04:58:06.673269  163442 command_runner.go:130] > # shared_cpuset = ""
	I0719 04:58:06.673279  163442 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0719 04:58:06.673290  163442 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0719 04:58:06.673299  163442 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0719 04:58:06.673313  163442 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0719 04:58:06.673323  163442 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0719 04:58:06.673333  163442 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0719 04:58:06.673344  163442 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0719 04:58:06.673355  163442 command_runner.go:130] > # enable_criu_support = false
	I0719 04:58:06.673366  163442 command_runner.go:130] > # Enable/disable the generation of the container,
	I0719 04:58:06.673378  163442 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0719 04:58:06.673388  163442 command_runner.go:130] > # enable_pod_events = false
	I0719 04:58:06.673401  163442 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0719 04:58:06.673422  163442 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0719 04:58:06.673432  163442 command_runner.go:130] > # default_runtime = "runc"
	I0719 04:58:06.673441  163442 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0719 04:58:06.673457  163442 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0719 04:58:06.673473  163442 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0719 04:58:06.673484  163442 command_runner.go:130] > # creation as a file is not desired either.
	I0719 04:58:06.673501  163442 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0719 04:58:06.673512  163442 command_runner.go:130] > # the hostname is being managed dynamically.
	I0719 04:58:06.673523  163442 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0719 04:58:06.673533  163442 command_runner.go:130] > # ]
	I0719 04:58:06.673545  163442 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0719 04:58:06.673557  163442 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0719 04:58:06.673564  163442 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0719 04:58:06.673569  163442 command_runner.go:130] > # Each entry in the table should follow the format:
	I0719 04:58:06.673573  163442 command_runner.go:130] > #
	I0719 04:58:06.673579  163442 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0719 04:58:06.673587  163442 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0719 04:58:06.673606  163442 command_runner.go:130] > # runtime_type = "oci"
	I0719 04:58:06.673613  163442 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0719 04:58:06.673617  163442 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0719 04:58:06.673624  163442 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0719 04:58:06.673631  163442 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0719 04:58:06.673640  163442 command_runner.go:130] > # monitor_env = []
	I0719 04:58:06.673651  163442 command_runner.go:130] > # privileged_without_host_devices = false
	I0719 04:58:06.673661  163442 command_runner.go:130] > # allowed_annotations = []
	I0719 04:58:06.673669  163442 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0719 04:58:06.673678  163442 command_runner.go:130] > # Where:
	I0719 04:58:06.673687  163442 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0719 04:58:06.673699  163442 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0719 04:58:06.673707  163442 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0719 04:58:06.673713  163442 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0719 04:58:06.673719  163442 command_runner.go:130] > #   in $PATH.
	I0719 04:58:06.673725  163442 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0719 04:58:06.673732  163442 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0719 04:58:06.673738  163442 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0719 04:58:06.673744  163442 command_runner.go:130] > #   state.
	I0719 04:58:06.673750  163442 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0719 04:58:06.673757  163442 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0719 04:58:06.673764  163442 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0719 04:58:06.673769  163442 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0719 04:58:06.673777  163442 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0719 04:58:06.673783  163442 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0719 04:58:06.673789  163442 command_runner.go:130] > #   The currently recognized values are:
	I0719 04:58:06.673795  163442 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0719 04:58:06.673804  163442 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0719 04:58:06.673811  163442 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0719 04:58:06.673817  163442 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0719 04:58:06.673827  163442 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0719 04:58:06.673833  163442 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0719 04:58:06.673841  163442 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0719 04:58:06.673848  163442 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0719 04:58:06.673857  163442 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0719 04:58:06.673862  163442 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0719 04:58:06.673866  163442 command_runner.go:130] > #   deprecated option "conmon".
	I0719 04:58:06.673875  163442 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0719 04:58:06.673881  163442 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0719 04:58:06.673888  163442 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0719 04:58:06.673894  163442 command_runner.go:130] > #   should be moved to the container's cgroup
	I0719 04:58:06.673900  163442 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0719 04:58:06.673907  163442 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0719 04:58:06.673913  163442 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0719 04:58:06.673920  163442 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0719 04:58:06.673923  163442 command_runner.go:130] > #
	I0719 04:58:06.673930  163442 command_runner.go:130] > # Using the seccomp notifier feature:
	I0719 04:58:06.673933  163442 command_runner.go:130] > #
	I0719 04:58:06.673941  163442 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0719 04:58:06.673947  163442 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0719 04:58:06.673952  163442 command_runner.go:130] > #
	I0719 04:58:06.673958  163442 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0719 04:58:06.673964  163442 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0719 04:58:06.673969  163442 command_runner.go:130] > #
	I0719 04:58:06.673974  163442 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0719 04:58:06.673981  163442 command_runner.go:130] > # feature.
	I0719 04:58:06.673983  163442 command_runner.go:130] > #
	I0719 04:58:06.673989  163442 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0719 04:58:06.673997  163442 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0719 04:58:06.674006  163442 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0719 04:58:06.674040  163442 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0719 04:58:06.674048  163442 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0719 04:58:06.674052  163442 command_runner.go:130] > #
	I0719 04:58:06.674057  163442 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0719 04:58:06.674065  163442 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0719 04:58:06.674068  163442 command_runner.go:130] > #
	I0719 04:58:06.674074  163442 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0719 04:58:06.674081  163442 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0719 04:58:06.674084  163442 command_runner.go:130] > #
	I0719 04:58:06.674092  163442 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0719 04:58:06.674098  163442 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0719 04:58:06.674104  163442 command_runner.go:130] > # limitation.
	I0719 04:58:06.674108  163442 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0719 04:58:06.674114  163442 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0719 04:58:06.674118  163442 command_runner.go:130] > runtime_type = "oci"
	I0719 04:58:06.674124  163442 command_runner.go:130] > runtime_root = "/run/runc"
	I0719 04:58:06.674129  163442 command_runner.go:130] > runtime_config_path = ""
	I0719 04:58:06.674135  163442 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0719 04:58:06.674139  163442 command_runner.go:130] > monitor_cgroup = "pod"
	I0719 04:58:06.674143  163442 command_runner.go:130] > monitor_exec_cgroup = ""
	I0719 04:58:06.674147  163442 command_runner.go:130] > monitor_env = [
	I0719 04:58:06.674156  163442 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0719 04:58:06.674161  163442 command_runner.go:130] > ]
	I0719 04:58:06.674166  163442 command_runner.go:130] > privileged_without_host_devices = false
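	(Following the [crio.runtime.runtimes.<name>] format documented above, an additional handler could be declared alongside runc; the crun paths and annotation below are assumptions for illustration, not part of this run's config.)
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	allowed_annotations = [
		"io.kubernetes.cri-o.Devices",
	]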
	I0719 04:58:06.674173  163442 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0719 04:58:06.674178  163442 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0719 04:58:06.674185  163442 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0719 04:58:06.674193  163442 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0719 04:58:06.674202  163442 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0719 04:58:06.674208  163442 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0719 04:58:06.674219  163442 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0719 04:58:06.674232  163442 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0719 04:58:06.674240  163442 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0719 04:58:06.674246  163442 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0719 04:58:06.674252  163442 command_runner.go:130] > # Example:
	I0719 04:58:06.674257  163442 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0719 04:58:06.674262  163442 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0719 04:58:06.674269  163442 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0719 04:58:06.674274  163442 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0719 04:58:06.674281  163442 command_runner.go:130] > # cpuset = 0
	I0719 04:58:06.674284  163442 command_runner.go:130] > # cpushares = "0-1"
	I0719 04:58:06.674287  163442 command_runner.go:130] > # Where:
	I0719 04:58:06.674292  163442 command_runner.go:130] > # The workload name is workload-type.
	I0719 04:58:06.674299  163442 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0719 04:58:06.674306  163442 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0719 04:58:06.674311  163442 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0719 04:58:06.674322  163442 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0719 04:58:06.674330  163442 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0719 04:58:06.674336  163442 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0719 04:58:06.674345  163442 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0719 04:58:06.674349  163442 command_runner.go:130] > # Default value is set to true
	I0719 04:58:06.674355  163442 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0719 04:58:06.674360  163442 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0719 04:58:06.674369  163442 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0719 04:58:06.674376  163442 command_runner.go:130] > # Default value is set to 'false'
	I0719 04:58:06.674380  163442 command_runner.go:130] > # disable_hostport_mapping = false
	I0719 04:58:06.674388  163442 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0719 04:58:06.674392  163442 command_runner.go:130] > #
	I0719 04:58:06.674400  163442 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0719 04:58:06.674407  163442 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0719 04:58:06.674415  163442 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0719 04:58:06.674421  163442 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0719 04:58:06.674426  163442 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0719 04:58:06.674429  163442 command_runner.go:130] > [crio.image]
	I0719 04:58:06.674435  163442 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0719 04:58:06.674439  163442 command_runner.go:130] > # default_transport = "docker://"
	I0719 04:58:06.674445  163442 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0719 04:58:06.674450  163442 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0719 04:58:06.674454  163442 command_runner.go:130] > # global_auth_file = ""
	I0719 04:58:06.674458  163442 command_runner.go:130] > # The image used to instantiate infra containers.
	I0719 04:58:06.674463  163442 command_runner.go:130] > # This option supports live configuration reload.
	I0719 04:58:06.674467  163442 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0719 04:58:06.674473  163442 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0719 04:58:06.674478  163442 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0719 04:58:06.674482  163442 command_runner.go:130] > # This option supports live configuration reload.
	I0719 04:58:06.674486  163442 command_runner.go:130] > # pause_image_auth_file = ""
	I0719 04:58:06.674491  163442 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0719 04:58:06.674496  163442 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0719 04:58:06.674502  163442 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0719 04:58:06.674507  163442 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0719 04:58:06.674511  163442 command_runner.go:130] > # pause_command = "/pause"
	I0719 04:58:06.674516  163442 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0719 04:58:06.674521  163442 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0719 04:58:06.674527  163442 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0719 04:58:06.674532  163442 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0719 04:58:06.674537  163442 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0719 04:58:06.674542  163442 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0719 04:58:06.674545  163442 command_runner.go:130] > # pinned_images = [
	I0719 04:58:06.674548  163442 command_runner.go:130] > # ]
	I0719 04:58:06.674553  163442 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0719 04:58:06.674559  163442 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0719 04:58:06.674564  163442 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0719 04:58:06.674570  163442 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0719 04:58:06.674575  163442 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0719 04:58:06.674578  163442 command_runner.go:130] > # signature_policy = ""
	I0719 04:58:06.674583  163442 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0719 04:58:06.674593  163442 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0719 04:58:06.674598  163442 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0719 04:58:06.674603  163442 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0719 04:58:06.674608  163442 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0719 04:58:06.674612  163442 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0719 04:58:06.674618  163442 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0719 04:58:06.674624  163442 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0719 04:58:06.674627  163442 command_runner.go:130] > # changing them here.
	I0719 04:58:06.674631  163442 command_runner.go:130] > # insecure_registries = [
	I0719 04:58:06.674635  163442 command_runner.go:130] > # ]
	I0719 04:58:06.674640  163442 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0719 04:58:06.674645  163442 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0719 04:58:06.674649  163442 command_runner.go:130] > # image_volumes = "mkdir"
	I0719 04:58:06.674653  163442 command_runner.go:130] > # Temporary directory to use for storing big files
	I0719 04:58:06.674659  163442 command_runner.go:130] > # big_files_temporary_dir = ""
	I0719 04:58:06.674667  163442 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0719 04:58:06.674671  163442 command_runner.go:130] > # CNI plugins.
	I0719 04:58:06.674677  163442 command_runner.go:130] > [crio.network]
	I0719 04:58:06.674683  163442 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0719 04:58:06.674690  163442 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0719 04:58:06.674693  163442 command_runner.go:130] > # cni_default_network = ""
	I0719 04:58:06.674699  163442 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0719 04:58:06.674706  163442 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0719 04:58:06.674711  163442 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0719 04:58:06.674717  163442 command_runner.go:130] > # plugin_dirs = [
	I0719 04:58:06.674721  163442 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0719 04:58:06.674726  163442 command_runner.go:130] > # ]
	I0719 04:58:06.674732  163442 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0719 04:58:06.674737  163442 command_runner.go:130] > [crio.metrics]
	I0719 04:58:06.674743  163442 command_runner.go:130] > # Globally enable or disable metrics support.
	I0719 04:58:06.674749  163442 command_runner.go:130] > enable_metrics = true
	I0719 04:58:06.674754  163442 command_runner.go:130] > # Specify enabled metrics collectors.
	I0719 04:58:06.674761  163442 command_runner.go:130] > # Per default all metrics are enabled.
	I0719 04:58:06.674766  163442 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0719 04:58:06.674775  163442 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0719 04:58:06.674780  163442 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0719 04:58:06.674787  163442 command_runner.go:130] > # metrics_collectors = [
	I0719 04:58:06.674791  163442 command_runner.go:130] > # 	"operations",
	I0719 04:58:06.674798  163442 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0719 04:58:06.674803  163442 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0719 04:58:06.674809  163442 command_runner.go:130] > # 	"operations_errors",
	I0719 04:58:06.674815  163442 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0719 04:58:06.674829  163442 command_runner.go:130] > # 	"image_pulls_by_name",
	I0719 04:58:06.674840  163442 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0719 04:58:06.674848  163442 command_runner.go:130] > # 	"image_pulls_failures",
	I0719 04:58:06.674854  163442 command_runner.go:130] > # 	"image_pulls_successes",
	I0719 04:58:06.674859  163442 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0719 04:58:06.674866  163442 command_runner.go:130] > # 	"image_layer_reuse",
	I0719 04:58:06.674870  163442 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0719 04:58:06.674876  163442 command_runner.go:130] > # 	"containers_oom_total",
	I0719 04:58:06.674885  163442 command_runner.go:130] > # 	"containers_oom",
	I0719 04:58:06.674891  163442 command_runner.go:130] > # 	"processes_defunct",
	I0719 04:58:06.674901  163442 command_runner.go:130] > # 	"operations_total",
	I0719 04:58:06.674908  163442 command_runner.go:130] > # 	"operations_latency_seconds",
	I0719 04:58:06.674916  163442 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0719 04:58:06.674920  163442 command_runner.go:130] > # 	"operations_errors_total",
	I0719 04:58:06.674926  163442 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0719 04:58:06.674930  163442 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0719 04:58:06.674936  163442 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0719 04:58:06.674942  163442 command_runner.go:130] > # 	"image_pulls_success_total",
	I0719 04:58:06.674946  163442 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0719 04:58:06.674953  163442 command_runner.go:130] > # 	"containers_oom_count_total",
	I0719 04:58:06.674957  163442 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0719 04:58:06.674966  163442 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0719 04:58:06.674974  163442 command_runner.go:130] > # ]
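	(If only a subset of collectors were wanted, the commented list above could be narrowed to something like this sketch; the collector names are taken from the list in this dump.)
	metrics_collectors = [
		"operations",
		"image_pulls_failures",
		"containers_oom_total",
	]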
	I0719 04:58:06.674983  163442 command_runner.go:130] > # The port on which the metrics server will listen.
	I0719 04:58:06.674993  163442 command_runner.go:130] > # metrics_port = 9090
	I0719 04:58:06.675003  163442 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0719 04:58:06.675011  163442 command_runner.go:130] > # metrics_socket = ""
	I0719 04:58:06.675021  163442 command_runner.go:130] > # The certificate for the secure metrics server.
	I0719 04:58:06.675029  163442 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0719 04:58:06.675035  163442 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0719 04:58:06.675042  163442 command_runner.go:130] > # certificate on any modification event.
	I0719 04:58:06.675046  163442 command_runner.go:130] > # metrics_cert = ""
	I0719 04:58:06.675056  163442 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0719 04:58:06.675065  163442 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0719 04:58:06.675075  163442 command_runner.go:130] > # metrics_key = ""
	I0719 04:58:06.675086  163442 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0719 04:58:06.675095  163442 command_runner.go:130] > [crio.tracing]
	I0719 04:58:06.675104  163442 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0719 04:58:06.675113  163442 command_runner.go:130] > # enable_tracing = false
	I0719 04:58:06.675119  163442 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0719 04:58:06.675125  163442 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0719 04:58:06.675131  163442 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0719 04:58:06.675140  163442 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0719 04:58:06.675149  163442 command_runner.go:130] > # CRI-O NRI configuration.
	I0719 04:58:06.675158  163442 command_runner.go:130] > [crio.nri]
	I0719 04:58:06.675166  163442 command_runner.go:130] > # Globally enable or disable NRI.
	I0719 04:58:06.675174  163442 command_runner.go:130] > # enable_nri = false
	I0719 04:58:06.675183  163442 command_runner.go:130] > # NRI socket to listen on.
	I0719 04:58:06.675191  163442 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0719 04:58:06.675201  163442 command_runner.go:130] > # NRI plugin directory to use.
	I0719 04:58:06.675209  163442 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0719 04:58:06.675217  163442 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0719 04:58:06.675224  163442 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0719 04:58:06.675236  163442 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0719 04:58:06.675246  163442 command_runner.go:130] > # nri_disable_connections = false
	I0719 04:58:06.675256  163442 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0719 04:58:06.675267  163442 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0719 04:58:06.675277  163442 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0719 04:58:06.675286  163442 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0719 04:58:06.675297  163442 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0719 04:58:06.675303  163442 command_runner.go:130] > [crio.stats]
	I0719 04:58:06.675311  163442 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0719 04:58:06.675323  163442 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0719 04:58:06.675332  163442 command_runner.go:130] > # stats_collection_period = 0
	I0719 04:58:06.675362  163442 command_runner.go:130] ! time="2024-07-19 04:58:06.630452421Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0719 04:58:06.675382  163442 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
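	(Several of the values in this dump that differ from CRI-O's commented defaults could be collected into a small drop-in override, sketched below. The drop-in path is an assumed example; the values themselves are the ones logged above.)
	# e.g. /etc/crio/crio.conf.d/02-minikube.conf (illustrative path)
	[crio.api]
	grpc_max_send_msg_size = 16777216
	grpc_max_recv_msg_size = 16777216
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon = "/usr/libexec/crio/conmon"
	conmon_cgroup = "pod"
	pids_limit = 1024
	default_sysctls = [
		"net.ipv4.ip_unprivileged_port_start=0",
	]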
	I0719 04:58:06.675505  163442 cni.go:84] Creating CNI manager for ""
	I0719 04:58:06.675515  163442 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0719 04:58:06.675525  163442 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 04:58:06.675558  163442 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.17 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-270078 NodeName:multinode-270078 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.17"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.17 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 04:58:06.675718  163442 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.17
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-270078"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.17
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.17"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
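	Editor's note: the block above is the multi-document kubeadm configuration minikube renders before handing it to kubeadm (the run copies it to /var/tmp/minikube/kubeadm.yaml.new a few lines below). A minimal sketch, assuming gopkg.in/yaml.v3 is available, of splitting such a file into its four documents and printing each kind; illustrative only, not code from minikube:

	// Editorial sketch: split the generated multi-document kubeadm YAML and
	// print each document's apiVersion/kind. Expected kinds for the config
	// above: InitConfiguration, ClusterConfiguration, KubeletConfiguration,
	// KubeProxyConfiguration. The path is the one the run writes to.
	package main

	import (
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				log.Fatal(err)
			}
			fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
		}
	}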
	I0719 04:58:06.675791  163442 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 04:58:06.685278  163442 command_runner.go:130] > kubeadm
	I0719 04:58:06.685296  163442 command_runner.go:130] > kubectl
	I0719 04:58:06.685301  163442 command_runner.go:130] > kubelet
	I0719 04:58:06.685391  163442 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 04:58:06.685443  163442 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 04:58:06.694231  163442 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0719 04:58:06.709460  163442 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 04:58:06.724369  163442 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0719 04:58:06.739391  163442 ssh_runner.go:195] Run: grep 192.168.39.17	control-plane.minikube.internal$ /etc/hosts
	I0719 04:58:06.742767  163442 command_runner.go:130] > 192.168.39.17	control-plane.minikube.internal
	I0719 04:58:06.742839  163442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:58:06.874255  163442 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 04:58:06.888174  163442 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/multinode-270078 for IP: 192.168.39.17
	I0719 04:58:06.888200  163442 certs.go:194] generating shared ca certs ...
	I0719 04:58:06.888222  163442 certs.go:226] acquiring lock for ca certs: {Name:mk4073377b5f511f5cfaf63e5b0f12377e731a57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:58:06.888412  163442 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key
	I0719 04:58:06.888465  163442 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key
	I0719 04:58:06.888477  163442 certs.go:256] generating profile certs ...
	I0719 04:58:06.888557  163442 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/multinode-270078/client.key
	I0719 04:58:06.888613  163442 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/multinode-270078/apiserver.key.4ebc0a81
	I0719 04:58:06.888645  163442 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/multinode-270078/proxy-client.key
	I0719 04:58:06.888655  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 04:58:06.888667  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0719 04:58:06.888680  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 04:58:06.888692  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 04:58:06.888705  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/multinode-270078/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 04:58:06.888715  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/multinode-270078/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 04:58:06.888726  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/multinode-270078/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 04:58:06.888747  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/multinode-270078/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 04:58:06.888805  163442 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem (1338 bytes)
	W0719 04:58:06.888845  163442 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170_empty.pem, impossibly tiny 0 bytes
	I0719 04:58:06.888860  163442 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 04:58:06.888884  163442 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem (1082 bytes)
	I0719 04:58:06.888911  163442 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem (1123 bytes)
	I0719 04:58:06.888931  163442 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem (1679 bytes)
	I0719 04:58:06.888969  163442 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem (1708 bytes)
	I0719 04:58:06.888997  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:58:06.889010  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem -> /usr/share/ca-certificates/130170.pem
	I0719 04:58:06.889022  163442 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem -> /usr/share/ca-certificates/1301702.pem
	I0719 04:58:06.889600  163442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 04:58:06.911452  163442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 04:58:06.933215  163442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 04:58:06.954317  163442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 04:58:06.976640  163442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/multinode-270078/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 04:58:06.998183  163442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/multinode-270078/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 04:58:07.021306  163442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/multinode-270078/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 04:58:07.044551  163442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/multinode-270078/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 04:58:07.068942  163442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 04:58:07.091080  163442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem --> /usr/share/ca-certificates/130170.pem (1338 bytes)
	I0719 04:58:07.112817  163442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem --> /usr/share/ca-certificates/1301702.pem (1708 bytes)
	I0719 04:58:07.134513  163442 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 04:58:07.150104  163442 ssh_runner.go:195] Run: openssl version
	I0719 04:58:07.155452  163442 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0719 04:58:07.155598  163442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1301702.pem && ln -fs /usr/share/ca-certificates/1301702.pem /etc/ssl/certs/1301702.pem"
	I0719 04:58:07.165528  163442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1301702.pem
	I0719 04:58:07.169339  163442 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 19 04:19 /usr/share/ca-certificates/1301702.pem
	I0719 04:58:07.169447  163442 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 04:19 /usr/share/ca-certificates/1301702.pem
	I0719 04:58:07.169488  163442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1301702.pem
	I0719 04:58:07.174419  163442 command_runner.go:130] > 3ec20f2e
	I0719 04:58:07.174609  163442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1301702.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 04:58:07.182999  163442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 04:58:07.192616  163442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:58:07.196646  163442 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 19 03:38 /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:58:07.196711  163442 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:38 /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:58:07.196763  163442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:58:07.201876  163442 command_runner.go:130] > b5213941
	I0719 04:58:07.201920  163442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 04:58:07.210330  163442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130170.pem && ln -fs /usr/share/ca-certificates/130170.pem /etc/ssl/certs/130170.pem"
	I0719 04:58:07.222545  163442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130170.pem
	I0719 04:58:07.226735  163442 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 19 04:19 /usr/share/ca-certificates/130170.pem
	I0719 04:58:07.226805  163442 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 04:19 /usr/share/ca-certificates/130170.pem
	I0719 04:58:07.226864  163442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130170.pem
	I0719 04:58:07.231996  163442 command_runner.go:130] > 51391683
	I0719 04:58:07.232212  163442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/130170.pem /etc/ssl/certs/51391683.0"
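	Editor's note: the ls / "openssl x509 -hash -noout" / "ln -fs" sequence above installs each extra CA by its OpenSSL subject hash, so a <hash>.0 symlink in /etc/ssl/certs makes the certificate discoverable by hash-based lookup. A minimal Go sketch of the same two steps, assuming openssl is on PATH and using paths seen in the log; this is not minikube's implementation:

	// Editorial sketch mirroring the hash-and-symlink steps from the log above:
	// run `openssl x509 -hash -noout -in <pem>` to obtain the subject hash,
	// then link /etc/ssl/certs/<hash>.0 at the installed certificate.
	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		src := "/usr/share/ca-certificates/minikubeCA.pem" // cert installed by the run
		dst := "/etc/ssl/certs/minikubeCA.pem"             // copy the run links first

		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", src).Output()
		if err != nil {
			log.Fatal(err)
		}
		hash := strings.TrimSpace(string(out)) // "b5213941" in the log above

		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // ln -fs semantics: replace any existing link
		if err := os.Symlink(dst, link); err != nil {
			log.Fatal(err)
		}
		fmt.Println("linked", link, "->", dst)
	}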
	I0719 04:58:07.258560  163442 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 04:58:07.263354  163442 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 04:58:07.263373  163442 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0719 04:58:07.263378  163442 command_runner.go:130] > Device: 253,1	Inode: 5244971     Links: 1
	I0719 04:58:07.263384  163442 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0719 04:58:07.263392  163442 command_runner.go:130] > Access: 2024-07-19 04:51:24.057666638 +0000
	I0719 04:58:07.263396  163442 command_runner.go:130] > Modify: 2024-07-19 04:51:24.057666638 +0000
	I0719 04:58:07.263401  163442 command_runner.go:130] > Change: 2024-07-19 04:51:24.057666638 +0000
	I0719 04:58:07.263406  163442 command_runner.go:130] >  Birth: 2024-07-19 04:51:24.057666638 +0000
	I0719 04:58:07.263681  163442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 04:58:07.269136  163442 command_runner.go:130] > Certificate will not expire
	I0719 04:58:07.269299  163442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 04:58:07.274494  163442 command_runner.go:130] > Certificate will not expire
	I0719 04:58:07.274667  163442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 04:58:07.279841  163442 command_runner.go:130] > Certificate will not expire
	I0719 04:58:07.279912  163442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 04:58:07.285055  163442 command_runner.go:130] > Certificate will not expire
	I0719 04:58:07.285144  163442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 04:58:07.290218  163442 command_runner.go:130] > Certificate will not expire
	I0719 04:58:07.290286  163442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 04:58:07.295280  163442 command_runner.go:130] > Certificate will not expire
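	Editor's note: each "openssl x509 -noout -in <crt> -checkend 86400" call above asks whether the certificate will still be valid 24 hours (86400 seconds) from now. A minimal pure-Go equivalent using crypto/x509, shown only as a sketch (the run itself shells out to openssl; the path below is one of the certificates checked above):

	// Editorial sketch: report whether a PEM certificate expires within the
	// next 24 hours, matching the intent of `openssl x509 -checkend 86400`.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
		if err != nil {
			log.Fatal(err)
		}
		if soon {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}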
	I0719 04:58:07.295457  163442 kubeadm.go:392] StartCluster: {Name:multinode-270078 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-270078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.199 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.126 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:58:07.295562  163442 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 04:58:07.295616  163442 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 04:58:07.333440  163442 command_runner.go:130] > 31fffbf0c5d39c2cae58134d93f717c029a04143d8a407271e51a3eee5d53fee
	I0719 04:58:07.333475  163442 command_runner.go:130] > 9479115923e1476f87d4f9ff0aaa9428a706524e9af8230c485a77b61c88b38f
	I0719 04:58:07.333486  163442 command_runner.go:130] > 33b69ea0ad2f4ced8ca5a9cbb00cd82cee4d47163212947312b7db626ee10f91
	I0719 04:58:07.333493  163442 command_runner.go:130] > 055cf104d6bcd94aa209fcb410e05f96ce191340f62eecad3826a7ada7b521d1
	I0719 04:58:07.333498  163442 command_runner.go:130] > c4ed35a688d466e50ef053719ac811f72487848d9a77bb399a22fe1e445c6a68
	I0719 04:58:07.333503  163442 command_runner.go:130] > 3a6ddcbf56243021d5e0de54495d06019da39ed37ffac91a2b4f42cd4eae8884
	I0719 04:58:07.333508  163442 command_runner.go:130] > de944624d060c786278833c561aff05831f19ee086f8a1db3bcd28573b7cfd58
	I0719 04:58:07.333521  163442 command_runner.go:130] > 938b8fa47de5bc6b50fc4dc1842ace7580870e225f15d76f6b4e6dce2fc79401
	I0719 04:58:07.333546  163442 cri.go:89] found id: "31fffbf0c5d39c2cae58134d93f717c029a04143d8a407271e51a3eee5d53fee"
	I0719 04:58:07.333552  163442 cri.go:89] found id: "9479115923e1476f87d4f9ff0aaa9428a706524e9af8230c485a77b61c88b38f"
	I0719 04:58:07.333555  163442 cri.go:89] found id: "33b69ea0ad2f4ced8ca5a9cbb00cd82cee4d47163212947312b7db626ee10f91"
	I0719 04:58:07.333559  163442 cri.go:89] found id: "055cf104d6bcd94aa209fcb410e05f96ce191340f62eecad3826a7ada7b521d1"
	I0719 04:58:07.333562  163442 cri.go:89] found id: "c4ed35a688d466e50ef053719ac811f72487848d9a77bb399a22fe1e445c6a68"
	I0719 04:58:07.333565  163442 cri.go:89] found id: "3a6ddcbf56243021d5e0de54495d06019da39ed37ffac91a2b4f42cd4eae8884"
	I0719 04:58:07.333567  163442 cri.go:89] found id: "de944624d060c786278833c561aff05831f19ee086f8a1db3bcd28573b7cfd58"
	I0719 04:58:07.333570  163442 cri.go:89] found id: "938b8fa47de5bc6b50fc4dc1842ace7580870e225f15d76f6b4e6dce2fc79401"
	I0719 04:58:07.333572  163442 cri.go:89] found id: ""
	I0719 04:58:07.333614  163442 ssh_runner.go:195] Run: sudo runc list -f json
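	Editor's note: the "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system" call returns one container ID per line, which cri.go then records as the "found id" entries above. A minimal local sketch of the same listing, assuming crictl is installed and invoked with sudo; the real run executes this over SSH inside the VM:

	// Editorial sketch: list kube-system container IDs with the same crictl
	// flags the run uses, then print them in the "found id" style seen above.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			log.Fatal(err)
		}
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				fmt.Printf("found id: %q\n", line)
			}
		}
	}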
	
	
	==> CRI-O <==
	Jul 19 05:02:18 multinode-270078 crio[2825]: time="2024-07-19 05:02:18.166202638Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721365338166178524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133267,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f314ef48-de91-4a00-b5d5-bcfc2b1b851a name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 05:02:18 multinode-270078 crio[2825]: time="2024-07-19 05:02:18.166719618Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e8a3288d-2531-4d4c-99e8-a1c76f0f66eb name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 05:02:18 multinode-270078 crio[2825]: time="2024-07-19 05:02:18.166821711Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e8a3288d-2531-4d4c-99e8-a1c76f0f66eb name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 05:02:18 multinode-270078 crio[2825]: time="2024-07-19 05:02:18.167203452Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a08f4d6107a23f425f2dec6ef176831d08986018050e0d95ed0f59111e620ec0,PodSandboxId:eea26330c190476847869bd5df5688fa75402f135be5679f2e577cad6c59bb3d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721365127076313538,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hnr7x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c62f9d80-8985-4a63-88b5-587470389f71,},Annotations:map[string]string{io.kubernetes.container.hash: 368e1ca1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9be01f79f8da3ef0bc186f7447b4204d4d63a6ccac0071192ac76a67625560d1,PodSandboxId:e795aa419606052b4db6ce5c9974f75a3ee4df0da51c6bcc5acc459af77697ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721365093548896404,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzrm8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a11e3057-6b32-41a1-ac4e-7d8d225d7daa,},Annotations:map[string]string{io.kubernetes.container.hash: 60eaca83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:000677979f1c8e706c4b61f687d666974a63370255435878cad64b8411de5e6f,PodSandboxId:4e1699fef0b6fa028cc27622ac3e5a29c02818074532e304943e03af1abb0c76,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721365093450690176,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vgprr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43168421-b0df-4c84-b04a-7d1546c9a743,},Annotations:map[string]string{io.kubernetes.container.hash: dd796cb9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:799e82ef4e09a2f383bbb0370af8a24d51e91a63fec520568d8163efdaffd593,PodSandboxId:e3e47b38505198709a3f0ebb4ca40bb8ad9576d8f009ea4dbcb9f7c80efa2c9d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721365093437190045,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6b93c21-dfc9-4700-b89e-075132f74950,},An
notations:map[string]string{io.kubernetes.container.hash: 3366fd0e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9fb3de46f094b0d0a667b70372c5d21aa341c4924b29818f0d8c37a44214901,PodSandboxId:987538dbe9a7db0662c2e3a2a227fa56e949890a4123b1d1a5e2d44a7c2dc7cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721365093351546500,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7qj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1361f5f1-8094-4ca4-b6b9-3a104ad7d9a4,},Annotations:map[string]string{io.ku
bernetes.container.hash: f6f1eb61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:133cfe56953f01460a9fa4494092d188466178166e2dcfe71035b5d6b7545e8f,PodSandboxId:756b92b10f2f3048ac7644d6dbd74183a703634a3b532543bb7c24d1ceca7a66,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721365089548589249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed2ccd1a8875c16a07e0a333adf6d38,},Annotations:map[string]string{io.kubernetes.container.hash: f3f88bfe,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4243cd4492945493dffe7402237b7f0a5227fd3901c70b61f08f4914c3fb9e0,PodSandboxId:22ec679e11aaeaf036fd196eba5a6a50b474275c822fd21941f52119129654c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721365089538141066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151ad1a1872f30489054dde392cad73d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a38317f7436fad30424d945b8be343738b549bd1c74e0a51e050bdf48209a595,PodSandboxId:8f00246d9fae6ab2c3653d8b700535d4928bfaae3bf14c376b2e31fa8ae03ff2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721365089569680290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb3fdfc2ee14ec86fba08207580f105,},Annotations:map[string]string{io.kubernetes.container.hash: cacf0bef,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c87c6e53508415123b14762a3523c4e36d53f47c3607e4db1618bd3d9d3792e,PodSandboxId:83fe0d136542c98291b58ce36d7a74d5324e3ab081f9a2cc17a7d6ac92f341ba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721365089496925240,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1059e4f824c05230999a9ad26e02cc8d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af4ea4de1fab0615f814f44cc8f79161a4265145329eced45452330fb85e5635,PodSandboxId:7adc5b1bc87cb9a48cb7fe7967594ee97eb92721e3848105da137442a71de253,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721364773214850203,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hnr7x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c62f9d80-8985-4a63-88b5-587470389f71,},Annotations:map[string]string{io.kubernetes.container.hash: 368e1ca1,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31fffbf0c5d39c2cae58134d93f717c029a04143d8a407271e51a3eee5d53fee,PodSandboxId:d4270eaeb3d13634548801f213164841d460d5fccf3351e1df6e5bde36623b4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721364718671026127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vgprr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43168421-b0df-4c84-b04a-7d1546c9a743,},Annotations:map[string]string{io.kubernetes.container.hash: dd796cb9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9479115923e1476f87d4f9ff0aaa9428a706524e9af8230c485a77b61c88b38f,PodSandboxId:47761801c423190c4d9700bf3061bd45960d09fede8214960a9d6d000763b865,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721364718610626868,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d6b93c21-dfc9-4700-b89e-075132f74950,},Annotations:map[string]string{io.kubernetes.container.hash: 3366fd0e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33b69ea0ad2f4ced8ca5a9cbb00cd82cee4d47163212947312b7db626ee10f91,PodSandboxId:96e37395172bc78efd464da75623fd3bd30c13e531df67fb20edcf628687be43,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721364707185513519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7qj9p,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 1361f5f1-8094-4ca4-b6b9-3a104ad7d9a4,},Annotations:map[string]string{io.kubernetes.container.hash: f6f1eb61,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:055cf104d6bcd94aa209fcb410e05f96ce191340f62eecad3826a7ada7b521d1,PodSandboxId:061c80aa4b16f32e20cb66b90d67f709b92448b00c18393ad806cb0ee797a78a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721364706558824833,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzrm8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a11e3057-6b32-41a1-ac4e
-7d8d225d7daa,},Annotations:map[string]string{io.kubernetes.container.hash: 60eaca83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a6ddcbf56243021d5e0de54495d06019da39ed37ffac91a2b4f42cd4eae8884,PodSandboxId:8382f89eeff031a4482e2c2b0935c84950b1ea921b08e0862e1977556b6c3050,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721364687064606936,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed2ccd1a8875c16a07e0a333adf6d38,},Annotations:map[string]string
{io.kubernetes.container.hash: f3f88bfe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4ed35a688d466e50ef053719ac811f72487848d9a77bb399a22fe1e445c6a68,PodSandboxId:76d4b46d4d1352567b808f0f117460416443a878cc4fd4daa0dff4d8f1718a9d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721364687065435651,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1059e4f824c05230999a9ad26e02cc8d,},Annotations:map[s
tring]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de944624d060c786278833c561aff05831f19ee086f8a1db3bcd28573b7cfd58,PodSandboxId:e4b9588e49be18a08b3549453da56926edc2ab71feb8e2c513a70cab6e119305,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721364687055477015,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb3fdfc2ee14ec86fba08207580f105,},Annotations:map[string]string{io
.kubernetes.container.hash: cacf0bef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:938b8fa47de5bc6b50fc4dc1842ace7580870e225f15d76f6b4e6dce2fc79401,PodSandboxId:5b8ae4e732fa0c50432de04176ed2cbb321f601db7e768d33cf2dc344aab35bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721364686885604940,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151ad1a1872f30489054dde392cad73d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e8a3288d-2531-4d4c-99e8-a1c76f0f66eb name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 05:02:18 multinode-270078 crio[2825]: time="2024-07-19 05:02:18.210184249Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5ba61a19-2628-4989-8ca6-2dc8385cd925 name=/runtime.v1.RuntimeService/Version
	Jul 19 05:02:18 multinode-270078 crio[2825]: time="2024-07-19 05:02:18.210258896Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5ba61a19-2628-4989-8ca6-2dc8385cd925 name=/runtime.v1.RuntimeService/Version
	Jul 19 05:02:18 multinode-270078 crio[2825]: time="2024-07-19 05:02:18.211438020Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0c7e3cd5-4d54-47de-8526-47cb8ee36d7f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 05:02:18 multinode-270078 crio[2825]: time="2024-07-19 05:02:18.211898609Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721365338211874071,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133267,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0c7e3cd5-4d54-47de-8526-47cb8ee36d7f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 05:02:18 multinode-270078 crio[2825]: time="2024-07-19 05:02:18.212408865Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ee474450-d41c-4788-a40b-9469d848dd9e name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 05:02:18 multinode-270078 crio[2825]: time="2024-07-19 05:02:18.212477852Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ee474450-d41c-4788-a40b-9469d848dd9e name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 05:02:18 multinode-270078 crio[2825]: time="2024-07-19 05:02:18.212886724Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a08f4d6107a23f425f2dec6ef176831d08986018050e0d95ed0f59111e620ec0,PodSandboxId:eea26330c190476847869bd5df5688fa75402f135be5679f2e577cad6c59bb3d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721365127076313538,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hnr7x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c62f9d80-8985-4a63-88b5-587470389f71,},Annotations:map[string]string{io.kubernetes.container.hash: 368e1ca1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9be01f79f8da3ef0bc186f7447b4204d4d63a6ccac0071192ac76a67625560d1,PodSandboxId:e795aa419606052b4db6ce5c9974f75a3ee4df0da51c6bcc5acc459af77697ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721365093548896404,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzrm8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a11e3057-6b32-41a1-ac4e-7d8d225d7daa,},Annotations:map[string]string{io.kubernetes.container.hash: 60eaca83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:000677979f1c8e706c4b61f687d666974a63370255435878cad64b8411de5e6f,PodSandboxId:4e1699fef0b6fa028cc27622ac3e5a29c02818074532e304943e03af1abb0c76,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721365093450690176,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vgprr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43168421-b0df-4c84-b04a-7d1546c9a743,},Annotations:map[string]string{io.kubernetes.container.hash: dd796cb9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:799e82ef4e09a2f383bbb0370af8a24d51e91a63fec520568d8163efdaffd593,PodSandboxId:e3e47b38505198709a3f0ebb4ca40bb8ad9576d8f009ea4dbcb9f7c80efa2c9d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721365093437190045,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6b93c21-dfc9-4700-b89e-075132f74950,},An
notations:map[string]string{io.kubernetes.container.hash: 3366fd0e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9fb3de46f094b0d0a667b70372c5d21aa341c4924b29818f0d8c37a44214901,PodSandboxId:987538dbe9a7db0662c2e3a2a227fa56e949890a4123b1d1a5e2d44a7c2dc7cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721365093351546500,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7qj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1361f5f1-8094-4ca4-b6b9-3a104ad7d9a4,},Annotations:map[string]string{io.ku
bernetes.container.hash: f6f1eb61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:133cfe56953f01460a9fa4494092d188466178166e2dcfe71035b5d6b7545e8f,PodSandboxId:756b92b10f2f3048ac7644d6dbd74183a703634a3b532543bb7c24d1ceca7a66,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721365089548589249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed2ccd1a8875c16a07e0a333adf6d38,},Annotations:map[string]string{io.kubernetes.container.hash: f3f88bfe,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4243cd4492945493dffe7402237b7f0a5227fd3901c70b61f08f4914c3fb9e0,PodSandboxId:22ec679e11aaeaf036fd196eba5a6a50b474275c822fd21941f52119129654c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721365089538141066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151ad1a1872f30489054dde392cad73d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a38317f7436fad30424d945b8be343738b549bd1c74e0a51e050bdf48209a595,PodSandboxId:8f00246d9fae6ab2c3653d8b700535d4928bfaae3bf14c376b2e31fa8ae03ff2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721365089569680290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb3fdfc2ee14ec86fba08207580f105,},Annotations:map[string]string{io.kubernetes.container.hash: cacf0bef,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c87c6e53508415123b14762a3523c4e36d53f47c3607e4db1618bd3d9d3792e,PodSandboxId:83fe0d136542c98291b58ce36d7a74d5324e3ab081f9a2cc17a7d6ac92f341ba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721365089496925240,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1059e4f824c05230999a9ad26e02cc8d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af4ea4de1fab0615f814f44cc8f79161a4265145329eced45452330fb85e5635,PodSandboxId:7adc5b1bc87cb9a48cb7fe7967594ee97eb92721e3848105da137442a71de253,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721364773214850203,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hnr7x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c62f9d80-8985-4a63-88b5-587470389f71,},Annotations:map[string]string{io.kubernetes.container.hash: 368e1ca1,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31fffbf0c5d39c2cae58134d93f717c029a04143d8a407271e51a3eee5d53fee,PodSandboxId:d4270eaeb3d13634548801f213164841d460d5fccf3351e1df6e5bde36623b4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721364718671026127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vgprr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43168421-b0df-4c84-b04a-7d1546c9a743,},Annotations:map[string]string{io.kubernetes.container.hash: dd796cb9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9479115923e1476f87d4f9ff0aaa9428a706524e9af8230c485a77b61c88b38f,PodSandboxId:47761801c423190c4d9700bf3061bd45960d09fede8214960a9d6d000763b865,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721364718610626868,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d6b93c21-dfc9-4700-b89e-075132f74950,},Annotations:map[string]string{io.kubernetes.container.hash: 3366fd0e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33b69ea0ad2f4ced8ca5a9cbb00cd82cee4d47163212947312b7db626ee10f91,PodSandboxId:96e37395172bc78efd464da75623fd3bd30c13e531df67fb20edcf628687be43,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721364707185513519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7qj9p,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 1361f5f1-8094-4ca4-b6b9-3a104ad7d9a4,},Annotations:map[string]string{io.kubernetes.container.hash: f6f1eb61,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:055cf104d6bcd94aa209fcb410e05f96ce191340f62eecad3826a7ada7b521d1,PodSandboxId:061c80aa4b16f32e20cb66b90d67f709b92448b00c18393ad806cb0ee797a78a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721364706558824833,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzrm8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a11e3057-6b32-41a1-ac4e
-7d8d225d7daa,},Annotations:map[string]string{io.kubernetes.container.hash: 60eaca83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a6ddcbf56243021d5e0de54495d06019da39ed37ffac91a2b4f42cd4eae8884,PodSandboxId:8382f89eeff031a4482e2c2b0935c84950b1ea921b08e0862e1977556b6c3050,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721364687064606936,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed2ccd1a8875c16a07e0a333adf6d38,},Annotations:map[string]string
{io.kubernetes.container.hash: f3f88bfe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4ed35a688d466e50ef053719ac811f72487848d9a77bb399a22fe1e445c6a68,PodSandboxId:76d4b46d4d1352567b808f0f117460416443a878cc4fd4daa0dff4d8f1718a9d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721364687065435651,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1059e4f824c05230999a9ad26e02cc8d,},Annotations:map[s
tring]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de944624d060c786278833c561aff05831f19ee086f8a1db3bcd28573b7cfd58,PodSandboxId:e4b9588e49be18a08b3549453da56926edc2ab71feb8e2c513a70cab6e119305,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721364687055477015,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb3fdfc2ee14ec86fba08207580f105,},Annotations:map[string]string{io
.kubernetes.container.hash: cacf0bef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:938b8fa47de5bc6b50fc4dc1842ace7580870e225f15d76f6b4e6dce2fc79401,PodSandboxId:5b8ae4e732fa0c50432de04176ed2cbb321f601db7e768d33cf2dc344aab35bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721364686885604940,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151ad1a1872f30489054dde392cad73d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ee474450-d41c-4788-a40b-9469d848dd9e name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 05:02:18 multinode-270078 crio[2825]: time="2024-07-19 05:02:18.250713456Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=579b241a-bb76-4dbb-90d9-7e6098f22b3a name=/runtime.v1.RuntimeService/Version
	Jul 19 05:02:18 multinode-270078 crio[2825]: time="2024-07-19 05:02:18.250835470Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=579b241a-bb76-4dbb-90d9-7e6098f22b3a name=/runtime.v1.RuntimeService/Version
	Jul 19 05:02:18 multinode-270078 crio[2825]: time="2024-07-19 05:02:18.252100102Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8bc98d29-038a-4931-b665-72ec3b6fdee4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 05:02:18 multinode-270078 crio[2825]: time="2024-07-19 05:02:18.252509605Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721365338252485752,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133267,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8bc98d29-038a-4931-b665-72ec3b6fdee4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 05:02:18 multinode-270078 crio[2825]: time="2024-07-19 05:02:18.253217137Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=968100ff-b933-4467-9131-93faf4a78ee9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 05:02:18 multinode-270078 crio[2825]: time="2024-07-19 05:02:18.253293031Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=968100ff-b933-4467-9131-93faf4a78ee9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 05:02:18 multinode-270078 crio[2825]: time="2024-07-19 05:02:18.253612639Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a08f4d6107a23f425f2dec6ef176831d08986018050e0d95ed0f59111e620ec0,PodSandboxId:eea26330c190476847869bd5df5688fa75402f135be5679f2e577cad6c59bb3d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721365127076313538,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hnr7x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c62f9d80-8985-4a63-88b5-587470389f71,},Annotations:map[string]string{io.kubernetes.container.hash: 368e1ca1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9be01f79f8da3ef0bc186f7447b4204d4d63a6ccac0071192ac76a67625560d1,PodSandboxId:e795aa419606052b4db6ce5c9974f75a3ee4df0da51c6bcc5acc459af77697ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721365093548896404,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzrm8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a11e3057-6b32-41a1-ac4e-7d8d225d7daa,},Annotations:map[string]string{io.kubernetes.container.hash: 60eaca83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:000677979f1c8e706c4b61f687d666974a63370255435878cad64b8411de5e6f,PodSandboxId:4e1699fef0b6fa028cc27622ac3e5a29c02818074532e304943e03af1abb0c76,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721365093450690176,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vgprr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43168421-b0df-4c84-b04a-7d1546c9a743,},Annotations:map[string]string{io.kubernetes.container.hash: dd796cb9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:799e82ef4e09a2f383bbb0370af8a24d51e91a63fec520568d8163efdaffd593,PodSandboxId:e3e47b38505198709a3f0ebb4ca40bb8ad9576d8f009ea4dbcb9f7c80efa2c9d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721365093437190045,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6b93c21-dfc9-4700-b89e-075132f74950,},An
notations:map[string]string{io.kubernetes.container.hash: 3366fd0e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9fb3de46f094b0d0a667b70372c5d21aa341c4924b29818f0d8c37a44214901,PodSandboxId:987538dbe9a7db0662c2e3a2a227fa56e949890a4123b1d1a5e2d44a7c2dc7cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721365093351546500,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7qj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1361f5f1-8094-4ca4-b6b9-3a104ad7d9a4,},Annotations:map[string]string{io.ku
bernetes.container.hash: f6f1eb61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:133cfe56953f01460a9fa4494092d188466178166e2dcfe71035b5d6b7545e8f,PodSandboxId:756b92b10f2f3048ac7644d6dbd74183a703634a3b532543bb7c24d1ceca7a66,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721365089548589249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed2ccd1a8875c16a07e0a333adf6d38,},Annotations:map[string]string{io.kubernetes.container.hash: f3f88bfe,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4243cd4492945493dffe7402237b7f0a5227fd3901c70b61f08f4914c3fb9e0,PodSandboxId:22ec679e11aaeaf036fd196eba5a6a50b474275c822fd21941f52119129654c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721365089538141066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151ad1a1872f30489054dde392cad73d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a38317f7436fad30424d945b8be343738b549bd1c74e0a51e050bdf48209a595,PodSandboxId:8f00246d9fae6ab2c3653d8b700535d4928bfaae3bf14c376b2e31fa8ae03ff2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721365089569680290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb3fdfc2ee14ec86fba08207580f105,},Annotations:map[string]string{io.kubernetes.container.hash: cacf0bef,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c87c6e53508415123b14762a3523c4e36d53f47c3607e4db1618bd3d9d3792e,PodSandboxId:83fe0d136542c98291b58ce36d7a74d5324e3ab081f9a2cc17a7d6ac92f341ba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721365089496925240,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1059e4f824c05230999a9ad26e02cc8d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af4ea4de1fab0615f814f44cc8f79161a4265145329eced45452330fb85e5635,PodSandboxId:7adc5b1bc87cb9a48cb7fe7967594ee97eb92721e3848105da137442a71de253,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721364773214850203,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hnr7x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c62f9d80-8985-4a63-88b5-587470389f71,},Annotations:map[string]string{io.kubernetes.container.hash: 368e1ca1,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31fffbf0c5d39c2cae58134d93f717c029a04143d8a407271e51a3eee5d53fee,PodSandboxId:d4270eaeb3d13634548801f213164841d460d5fccf3351e1df6e5bde36623b4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721364718671026127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vgprr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43168421-b0df-4c84-b04a-7d1546c9a743,},Annotations:map[string]string{io.kubernetes.container.hash: dd796cb9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9479115923e1476f87d4f9ff0aaa9428a706524e9af8230c485a77b61c88b38f,PodSandboxId:47761801c423190c4d9700bf3061bd45960d09fede8214960a9d6d000763b865,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721364718610626868,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d6b93c21-dfc9-4700-b89e-075132f74950,},Annotations:map[string]string{io.kubernetes.container.hash: 3366fd0e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33b69ea0ad2f4ced8ca5a9cbb00cd82cee4d47163212947312b7db626ee10f91,PodSandboxId:96e37395172bc78efd464da75623fd3bd30c13e531df67fb20edcf628687be43,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721364707185513519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7qj9p,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 1361f5f1-8094-4ca4-b6b9-3a104ad7d9a4,},Annotations:map[string]string{io.kubernetes.container.hash: f6f1eb61,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:055cf104d6bcd94aa209fcb410e05f96ce191340f62eecad3826a7ada7b521d1,PodSandboxId:061c80aa4b16f32e20cb66b90d67f709b92448b00c18393ad806cb0ee797a78a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721364706558824833,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzrm8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a11e3057-6b32-41a1-ac4e
-7d8d225d7daa,},Annotations:map[string]string{io.kubernetes.container.hash: 60eaca83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a6ddcbf56243021d5e0de54495d06019da39ed37ffac91a2b4f42cd4eae8884,PodSandboxId:8382f89eeff031a4482e2c2b0935c84950b1ea921b08e0862e1977556b6c3050,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721364687064606936,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed2ccd1a8875c16a07e0a333adf6d38,},Annotations:map[string]string
{io.kubernetes.container.hash: f3f88bfe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4ed35a688d466e50ef053719ac811f72487848d9a77bb399a22fe1e445c6a68,PodSandboxId:76d4b46d4d1352567b808f0f117460416443a878cc4fd4daa0dff4d8f1718a9d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721364687065435651,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1059e4f824c05230999a9ad26e02cc8d,},Annotations:map[s
tring]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de944624d060c786278833c561aff05831f19ee086f8a1db3bcd28573b7cfd58,PodSandboxId:e4b9588e49be18a08b3549453da56926edc2ab71feb8e2c513a70cab6e119305,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721364687055477015,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb3fdfc2ee14ec86fba08207580f105,},Annotations:map[string]string{io
.kubernetes.container.hash: cacf0bef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:938b8fa47de5bc6b50fc4dc1842ace7580870e225f15d76f6b4e6dce2fc79401,PodSandboxId:5b8ae4e732fa0c50432de04176ed2cbb321f601db7e768d33cf2dc344aab35bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721364686885604940,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151ad1a1872f30489054dde392cad73d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=968100ff-b933-4467-9131-93faf4a78ee9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 05:02:18 multinode-270078 crio[2825]: time="2024-07-19 05:02:18.292707167Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=409852d5-9997-4ce8-8b90-091f32ba0e70 name=/runtime.v1.RuntimeService/Version
	Jul 19 05:02:18 multinode-270078 crio[2825]: time="2024-07-19 05:02:18.292824729Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=409852d5-9997-4ce8-8b90-091f32ba0e70 name=/runtime.v1.RuntimeService/Version
	Jul 19 05:02:18 multinode-270078 crio[2825]: time="2024-07-19 05:02:18.298238842Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6d3361de-3fb9-4e40-a6dc-6abecc227ccc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 05:02:18 multinode-270078 crio[2825]: time="2024-07-19 05:02:18.298628539Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721365338298606661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133267,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6d3361de-3fb9-4e40-a6dc-6abecc227ccc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 05:02:18 multinode-270078 crio[2825]: time="2024-07-19 05:02:18.299259577Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c4881f67-2a0d-49d6-b822-cfff75b8effc name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 05:02:18 multinode-270078 crio[2825]: time="2024-07-19 05:02:18.299328830Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c4881f67-2a0d-49d6-b822-cfff75b8effc name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 05:02:18 multinode-270078 crio[2825]: time="2024-07-19 05:02:18.299651457Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a08f4d6107a23f425f2dec6ef176831d08986018050e0d95ed0f59111e620ec0,PodSandboxId:eea26330c190476847869bd5df5688fa75402f135be5679f2e577cad6c59bb3d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1721365127076313538,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hnr7x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c62f9d80-8985-4a63-88b5-587470389f71,},Annotations:map[string]string{io.kubernetes.container.hash: 368e1ca1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9be01f79f8da3ef0bc186f7447b4204d4d63a6ccac0071192ac76a67625560d1,PodSandboxId:e795aa419606052b4db6ce5c9974f75a3ee4df0da51c6bcc5acc459af77697ab,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_RUNNING,CreatedAt:1721365093548896404,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzrm8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a11e3057-6b32-41a1-ac4e-7d8d225d7daa,},Annotations:map[string]string{io.kubernetes.container.hash: 60eaca83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:000677979f1c8e706c4b61f687d666974a63370255435878cad64b8411de5e6f,PodSandboxId:4e1699fef0b6fa028cc27622ac3e5a29c02818074532e304943e03af1abb0c76,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721365093450690176,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vgprr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43168421-b0df-4c84-b04a-7d1546c9a743,},Annotations:map[string]string{io.kubernetes.container.hash: dd796cb9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:799e82ef4e09a2f383bbb0370af8a24d51e91a63fec520568d8163efdaffd593,PodSandboxId:e3e47b38505198709a3f0ebb4ca40bb8ad9576d8f009ea4dbcb9f7c80efa2c9d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721365093437190045,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6b93c21-dfc9-4700-b89e-075132f74950,},An
notations:map[string]string{io.kubernetes.container.hash: 3366fd0e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9fb3de46f094b0d0a667b70372c5d21aa341c4924b29818f0d8c37a44214901,PodSandboxId:987538dbe9a7db0662c2e3a2a227fa56e949890a4123b1d1a5e2d44a7c2dc7cc,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1721365093351546500,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7qj9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1361f5f1-8094-4ca4-b6b9-3a104ad7d9a4,},Annotations:map[string]string{io.ku
bernetes.container.hash: f6f1eb61,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:133cfe56953f01460a9fa4494092d188466178166e2dcfe71035b5d6b7545e8f,PodSandboxId:756b92b10f2f3048ac7644d6dbd74183a703634a3b532543bb7c24d1ceca7a66,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1721365089548589249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed2ccd1a8875c16a07e0a333adf6d38,},Annotations:map[string]string{io.kubernetes.container.hash: f3f88bfe,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4243cd4492945493dffe7402237b7f0a5227fd3901c70b61f08f4914c3fb9e0,PodSandboxId:22ec679e11aaeaf036fd196eba5a6a50b474275c822fd21941f52119129654c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1721365089538141066,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151ad1a1872f30489054dde392cad73d,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a38317f7436fad30424d945b8be343738b549bd1c74e0a51e050bdf48209a595,PodSandboxId:8f00246d9fae6ab2c3653d8b700535d4928bfaae3bf14c376b2e31fa8ae03ff2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1721365089569680290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb3fdfc2ee14ec86fba08207580f105,},Annotations:map[string]string{io.kubernetes.container.hash: cacf0bef,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c87c6e53508415123b14762a3523c4e36d53f47c3607e4db1618bd3d9d3792e,PodSandboxId:83fe0d136542c98291b58ce36d7a74d5324e3ab081f9a2cc17a7d6ac92f341ba,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1721365089496925240,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1059e4f824c05230999a9ad26e02cc8d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af4ea4de1fab0615f814f44cc8f79161a4265145329eced45452330fb85e5635,PodSandboxId:7adc5b1bc87cb9a48cb7fe7967594ee97eb92721e3848105da137442a71de253,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1721364773214850203,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-hnr7x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c62f9d80-8985-4a63-88b5-587470389f71,},Annotations:map[string]string{io.kubernetes.container.hash: 368e1ca1,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31fffbf0c5d39c2cae58134d93f717c029a04143d8a407271e51a3eee5d53fee,PodSandboxId:d4270eaeb3d13634548801f213164841d460d5fccf3351e1df6e5bde36623b4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721364718671026127,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vgprr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 43168421-b0df-4c84-b04a-7d1546c9a743,},Annotations:map[string]string{io.kubernetes.container.hash: dd796cb9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9479115923e1476f87d4f9ff0aaa9428a706524e9af8230c485a77b61c88b38f,PodSandboxId:47761801c423190c4d9700bf3061bd45960d09fede8214960a9d6d000763b865,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721364718610626868,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d6b93c21-dfc9-4700-b89e-075132f74950,},Annotations:map[string]string{io.kubernetes.container.hash: 3366fd0e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33b69ea0ad2f4ced8ca5a9cbb00cd82cee4d47163212947312b7db626ee10f91,PodSandboxId:96e37395172bc78efd464da75623fd3bd30c13e531df67fb20edcf628687be43,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1721364707185513519,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7qj9p,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 1361f5f1-8094-4ca4-b6b9-3a104ad7d9a4,},Annotations:map[string]string{io.kubernetes.container.hash: f6f1eb61,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:055cf104d6bcd94aa209fcb410e05f96ce191340f62eecad3826a7ada7b521d1,PodSandboxId:061c80aa4b16f32e20cb66b90d67f709b92448b00c18393ad806cb0ee797a78a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,State:CONTAINER_EXITED,CreatedAt:1721364706558824833,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzrm8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a11e3057-6b32-41a1-ac4e
-7d8d225d7daa,},Annotations:map[string]string{io.kubernetes.container.hash: 60eaca83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a6ddcbf56243021d5e0de54495d06019da39ed37ffac91a2b4f42cd4eae8884,PodSandboxId:8382f89eeff031a4482e2c2b0935c84950b1ea921b08e0862e1977556b6c3050,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1721364687064606936,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ed2ccd1a8875c16a07e0a333adf6d38,},Annotations:map[string]string
{io.kubernetes.container.hash: f3f88bfe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4ed35a688d466e50ef053719ac811f72487848d9a77bb399a22fe1e445c6a68,PodSandboxId:76d4b46d4d1352567b808f0f117460416443a878cc4fd4daa0dff4d8f1718a9d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1721364687065435651,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1059e4f824c05230999a9ad26e02cc8d,},Annotations:map[s
tring]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de944624d060c786278833c561aff05831f19ee086f8a1db3bcd28573b7cfd58,PodSandboxId:e4b9588e49be18a08b3549453da56926edc2ab71feb8e2c513a70cab6e119305,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1721364687055477015,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cb3fdfc2ee14ec86fba08207580f105,},Annotations:map[string]string{io
.kubernetes.container.hash: cacf0bef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:938b8fa47de5bc6b50fc4dc1842ace7580870e225f15d76f6b4e6dce2fc79401,PodSandboxId:5b8ae4e732fa0c50432de04176ed2cbb321f601db7e768d33cf2dc344aab35bd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1721364686885604940,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-270078,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151ad1a1872f30489054dde392cad73d,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c4881f67-2a0d-49d6-b822-cfff75b8effc name=/runtime.v1.RuntimeService/ListContainers
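The entries above are CRI-O debug traces of CRI RuntimeService/ImageService calls (Version, ImageFsInfo, ListContainers) issued by CRI clients such as the kubelet and crictl while this report was being collected. As a sketch, assuming the stock minikube VM where cri-o runs as a systemd unit, the same stream could be inspected directly on the node:

	minikube ssh -p multinode-270078 "sudo journalctl -u crio --no-pager | tail -n 50"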
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a08f4d6107a23       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   eea26330c1904       busybox-fc5497c4f-hnr7x
	9be01f79f8da3       5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f                                      4 minutes ago       Running             kindnet-cni               1                   e795aa4196060       kindnet-fzrm8
	000677979f1c8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   4e1699fef0b6f       coredns-7db6d8ff4d-vgprr
	799e82ef4e09a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   e3e47b3850519       storage-provisioner
	c9fb3de46f094       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   987538dbe9a7d       kube-proxy-7qj9p
	a38317f7436fa       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            1                   8f00246d9fae6       kube-apiserver-multinode-270078
	133cfe56953f0       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   756b92b10f2f3       etcd-multinode-270078
	c4243cd449294       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   22ec679e11aae       kube-scheduler-multinode-270078
	4c87c6e535084       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   1                   83fe0d136542c       kube-controller-manager-multinode-270078
	af4ea4de1fab0       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   7adc5b1bc87cb       busybox-fc5497c4f-hnr7x
	31fffbf0c5d39       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   d4270eaeb3d13       coredns-7db6d8ff4d-vgprr
	9479115923e14       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   47761801c4231       storage-provisioner
	33b69ea0ad2f4       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      10 minutes ago      Exited              kube-proxy                0                   96e37395172bc       kube-proxy-7qj9p
	055cf104d6bcd       5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f                                      10 minutes ago      Exited              kindnet-cni               0                   061c80aa4b16f       kindnet-fzrm8
	c4ed35a688d46       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      10 minutes ago      Exited              kube-controller-manager   0                   76d4b46d4d135       kube-controller-manager-multinode-270078
	3a6ddcbf56243       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   8382f89eeff03       etcd-multinode-270078
	de944624d060c       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      10 minutes ago      Exited              kube-apiserver            0                   e4b9588e49be1       kube-apiserver-multinode-270078
	938b8fa47de5b       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      10 minutes ago      Exited              kube-scheduler            0                   5b8ae4e732fa0       kube-scheduler-multinode-270078
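	The table above is a condensed view of the ListContainers responses earlier in the log: one attempt-1 (Running) container per component after the node restart, plus the attempt-0 (Exited) originals. Assuming crictl is available inside the VM (it is on the stock minikube ISO), a comparable listing could be reproduced with:

	minikube ssh -p multinode-270078 "sudo crictl ps -a"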
	
	
	==> coredns [000677979f1c8e706c4b61f687d666974a63370255435878cad64b8411de5e6f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56511 - 26371 "HINFO IN 5389642236483648416.4908493452828383574. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012349334s
	
	
	==> coredns [31fffbf0c5d39c2cae58134d93f717c029a04143d8a407271e51a3eee5d53fee] <==
	[INFO] 10.244.1.2:56378 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001841418s
	[INFO] 10.244.1.2:42241 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000190661s
	[INFO] 10.244.1.2:50886 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000074374s
	[INFO] 10.244.1.2:38850 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001348101s
	[INFO] 10.244.1.2:55758 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000067584s
	[INFO] 10.244.1.2:33739 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000056266s
	[INFO] 10.244.1.2:58724 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066024s
	[INFO] 10.244.0.3:45855 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102308s
	[INFO] 10.244.0.3:50514 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000062647s
	[INFO] 10.244.0.3:42290 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073421s
	[INFO] 10.244.0.3:51562 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000045836s
	[INFO] 10.244.1.2:38503 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093476s
	[INFO] 10.244.1.2:34185 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000072618s
	[INFO] 10.244.1.2:37438 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005982s
	[INFO] 10.244.1.2:60714 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000049414s
	[INFO] 10.244.0.3:49543 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111061s
	[INFO] 10.244.0.3:46617 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000194799s
	[INFO] 10.244.0.3:37021 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000127817s
	[INFO] 10.244.0.3:52002 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000105369s
	[INFO] 10.244.1.2:38508 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157328s
	[INFO] 10.244.1.2:53991 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000099448s
	[INFO] 10.244.1.2:56096 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000070912s
	[INFO] 10.244.1.2:56089 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000072844s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
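	The lookups logged above come from cluster pods (10.244.0.3 and 10.244.1.2) resolving kubernetes.default and host.minikube.internal through this CoreDNS instance before it was terminated. Assuming the busybox-fc5497c4f-hnr7x pod from the listing is still running and the kubeconfig context carries the profile name, an equivalent query could be replayed with:

	kubectl --context multinode-270078 exec busybox-fc5497c4f-hnr7x -- nslookup kubernetes.default.svc.cluster.local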
	
	
	==> describe nodes <==
	Name:               multinode-270078
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-270078
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=multinode-270078
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T04_51_33_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 04:51:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-270078
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 05:02:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 04:58:12 +0000   Fri, 19 Jul 2024 04:51:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 04:58:12 +0000   Fri, 19 Jul 2024 04:51:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 04:58:12 +0000   Fri, 19 Jul 2024 04:51:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 04:58:12 +0000   Fri, 19 Jul 2024 04:51:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    multinode-270078
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4ed569a1cbcf4e7c9997772206799d49
	  System UUID:                4ed569a1-cbcf-4e7c-9997-772206799d49
	  Boot ID:                    ad789f78-98f7-47e5-9dc4-82f6628b4d18
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hnr7x                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m28s
	  kube-system                 coredns-7db6d8ff4d-vgprr                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-270078                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-fzrm8                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-270078             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-270078    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-7qj9p                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-270078             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                   From             Message
	  ----    ------                   ----                  ----             -------
	  Normal  Starting                 10m                   kube-proxy       
	  Normal  Starting                 4m4s                  kube-proxy       
	  Normal  Starting                 10m                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     10m                   kubelet          Node multinode-270078 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  10m                   kubelet          Node multinode-270078 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                   kubelet          Node multinode-270078 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  10m                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                   node-controller  Node multinode-270078 event: Registered Node multinode-270078 in Controller
	  Normal  NodeReady                10m                   kubelet          Node multinode-270078 status is now: NodeReady
	  Normal  Starting                 4m10s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m10s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m9s (x8 over 4m10s)  kubelet          Node multinode-270078 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s (x8 over 4m10s)  kubelet          Node multinode-270078 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s (x7 over 4m10s)  kubelet          Node multinode-270078 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m53s                 node-controller  Node multinode-270078 event: Registered Node multinode-270078 in Controller
	
	
	Name:               multinode-270078-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-270078-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=multinode-270078
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T04_58_54_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 04:58:53 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-270078-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 04:59:55 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 19 Jul 2024 04:59:24 +0000   Fri, 19 Jul 2024 05:00:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 19 Jul 2024 04:59:24 +0000   Fri, 19 Jul 2024 05:00:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 19 Jul 2024 04:59:24 +0000   Fri, 19 Jul 2024 05:00:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 19 Jul 2024 04:59:24 +0000   Fri, 19 Jul 2024 05:00:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.199
	  Hostname:    multinode-270078-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3de61bf841c647cb861852b49725b4e3
	  System UUID:                3de61bf8-41c6-47cb-8618-52b49725b4e3
	  Boot ID:                    0458ebc9-b9d4-4c03-8f36-f91d3b59ce87
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hps86    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kindnet-ctdvf              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m51s
	  kube-system                 kube-proxy-6xrft           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m20s                  kube-proxy       
	  Normal  Starting                 9m46s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m52s (x2 over 9m52s)  kubelet          Node multinode-270078-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m52s (x2 over 9m52s)  kubelet          Node multinode-270078-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m52s (x2 over 9m52s)  kubelet          Node multinode-270078-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m31s                  kubelet          Node multinode-270078-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m25s (x2 over 3m25s)  kubelet          Node multinode-270078-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m25s (x2 over 3m25s)  kubelet          Node multinode-270078-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m25s (x2 over 3m25s)  kubelet          Node multinode-270078-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m5s                   kubelet          Node multinode-270078-m02 status is now: NodeReady
	  Normal  NodeNotReady             98s                    node-controller  Node multinode-270078-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.060006] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058057] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.184476] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.101561] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.244264] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +3.854866] systemd-fstab-generator[754]: Ignoring "noauto" option for root device
	[  +3.504684] systemd-fstab-generator[929]: Ignoring "noauto" option for root device
	[  +0.055979] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.980793] systemd-fstab-generator[1258]: Ignoring "noauto" option for root device
	[  +0.086386] kauditd_printk_skb: 69 callbacks suppressed
	[  +7.165512] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.897627] systemd-fstab-generator[1447]: Ignoring "noauto" option for root device
	[ +12.615312] kauditd_printk_skb: 60 callbacks suppressed
	[Jul19 04:52] kauditd_printk_skb: 14 callbacks suppressed
	[Jul19 04:58] systemd-fstab-generator[2743]: Ignoring "noauto" option for root device
	[  +0.132436] systemd-fstab-generator[2755]: Ignoring "noauto" option for root device
	[  +0.172438] systemd-fstab-generator[2769]: Ignoring "noauto" option for root device
	[  +0.156709] systemd-fstab-generator[2781]: Ignoring "noauto" option for root device
	[  +0.298704] systemd-fstab-generator[2809]: Ignoring "noauto" option for root device
	[  +0.987039] systemd-fstab-generator[2907]: Ignoring "noauto" option for root device
	[  +1.875137] systemd-fstab-generator[3030]: Ignoring "noauto" option for root device
	[  +4.652277] kauditd_printk_skb: 184 callbacks suppressed
	[ +11.793761] kauditd_printk_skb: 32 callbacks suppressed
	[  +4.454092] systemd-fstab-generator[3849]: Ignoring "noauto" option for root device
	[ +17.478687] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [133cfe56953f01460a9fa4494092d188466178166e2dcfe71035b5d6b7545e8f] <==
	{"level":"info","ts":"2024-07-19T04:58:10.096186Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-19T04:58:10.096213Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-19T04:58:10.09645Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2212c0bfe49c9415 switched to configuration voters=(2455236677277094933)"}
	{"level":"info","ts":"2024-07-19T04:58:10.096532Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3ecd98d5111bce24","local-member-id":"2212c0bfe49c9415","added-peer-id":"2212c0bfe49c9415","added-peer-peer-urls":["https://192.168.39.17:2380"]}
	{"level":"info","ts":"2024-07-19T04:58:10.096662Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3ecd98d5111bce24","local-member-id":"2212c0bfe49c9415","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T04:58:10.096703Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T04:58:10.106101Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-19T04:58:10.106298Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"2212c0bfe49c9415","initial-advertise-peer-urls":["https://192.168.39.17:2380"],"listen-peer-urls":["https://192.168.39.17:2380"],"advertise-client-urls":["https://192.168.39.17:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.17:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-19T04:58:10.106341Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-19T04:58:10.106492Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.17:2380"}
	{"level":"info","ts":"2024-07-19T04:58:10.106513Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.17:2380"}
	{"level":"info","ts":"2024-07-19T04:58:11.345313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2212c0bfe49c9415 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-19T04:58:11.345365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2212c0bfe49c9415 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-19T04:58:11.345403Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2212c0bfe49c9415 received MsgPreVoteResp from 2212c0bfe49c9415 at term 2"}
	{"level":"info","ts":"2024-07-19T04:58:11.345418Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2212c0bfe49c9415 became candidate at term 3"}
	{"level":"info","ts":"2024-07-19T04:58:11.345424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2212c0bfe49c9415 received MsgVoteResp from 2212c0bfe49c9415 at term 3"}
	{"level":"info","ts":"2024-07-19T04:58:11.345447Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2212c0bfe49c9415 became leader at term 3"}
	{"level":"info","ts":"2024-07-19T04:58:11.345459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2212c0bfe49c9415 elected leader 2212c0bfe49c9415 at term 3"}
	{"level":"info","ts":"2024-07-19T04:58:11.350363Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"2212c0bfe49c9415","local-member-attributes":"{Name:multinode-270078 ClientURLs:[https://192.168.39.17:2379]}","request-path":"/0/members/2212c0bfe49c9415/attributes","cluster-id":"3ecd98d5111bce24","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T04:58:11.350497Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T04:58:11.350595Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T04:58:11.351857Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T04:58:11.351909Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T04:58:11.352382Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-19T04:58:11.353364Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.17:2379"}
	
	
	==> etcd [3a6ddcbf56243021d5e0de54495d06019da39ed37ffac91a2b4f42cd4eae8884] <==
	{"level":"info","ts":"2024-07-19T04:52:27.048835Z","caller":"traceutil/trace.go:171","msg":"trace[1709312435] transaction","detail":"{read_only:false; response_revision:441; number_of_response:1; }","duration":"197.153757ms","start":"2024-07-19T04:52:26.851668Z","end":"2024-07-19T04:52:27.048821Z","steps":["trace[1709312435] 'process raft request'  (duration: 191.608229ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T04:53:20.894632Z","caller":"traceutil/trace.go:171","msg":"trace[1137693780] transaction","detail":"{read_only:false; response_revision:572; number_of_response:1; }","duration":"234.296483ms","start":"2024-07-19T04:53:20.660311Z","end":"2024-07-19T04:53:20.894608Z","steps":["trace[1137693780] 'process raft request'  (duration: 234.209972ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T04:53:20.901055Z","caller":"traceutil/trace.go:171","msg":"trace[586278936] linearizableReadLoop","detail":"{readStateIndex:608; appliedIndex:607; }","duration":"137.391031ms","start":"2024-07-19T04:53:20.763652Z","end":"2024-07-19T04:53:20.901043Z","steps":["trace[586278936] 'read index received'  (duration: 131.243785ms)","trace[586278936] 'applied index is now lower than readState.Index'  (duration: 6.146603ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T04:53:20.901215Z","caller":"traceutil/trace.go:171","msg":"trace[507960596] transaction","detail":"{read_only:false; response_revision:573; number_of_response:1; }","duration":"174.48987ms","start":"2024-07-19T04:53:20.726717Z","end":"2024-07-19T04:53:20.901207Z","steps":["trace[507960596] 'process raft request'  (duration: 174.265438ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T04:53:20.901424Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.757845ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-270078-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-07-19T04:53:20.901506Z","caller":"traceutil/trace.go:171","msg":"trace[621944805] range","detail":"{range_begin:/registry/minions/multinode-270078-m03; range_end:; response_count:1; response_revision:573; }","duration":"137.836783ms","start":"2024-07-19T04:53:20.763627Z","end":"2024-07-19T04:53:20.901464Z","steps":["trace[621944805] 'agreement among raft nodes before linearized reading'  (duration: 137.712914ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T04:53:31.246634Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.20408ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10670594086507534256 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.17\" mod_revision:596 > success:<request_put:<key:\"/registry/masterleases/192.168.39.17\" value_size:66 lease:1447222049652758446 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.17\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-19T04:53:31.24684Z","caller":"traceutil/trace.go:171","msg":"trace[1315389735] transaction","detail":"{read_only:false; response_revision:625; number_of_response:1; }","duration":"155.845452ms","start":"2024-07-19T04:53:31.090983Z","end":"2024-07-19T04:53:31.246829Z","steps":["trace[1315389735] 'process raft request'  (duration: 155.753682ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T04:53:31.247062Z","caller":"traceutil/trace.go:171","msg":"trace[824638523] transaction","detail":"{read_only:false; response_revision:624; number_of_response:1; }","duration":"219.028211ms","start":"2024-07-19T04:53:31.028025Z","end":"2024-07-19T04:53:31.247053Z","steps":["trace[824638523] 'process raft request'  (duration: 86.219621ms)","trace[824638523] 'compare'  (duration: 132.106337ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T04:53:31.440405Z","caller":"traceutil/trace.go:171","msg":"trace[469390373] linearizableReadLoop","detail":"{readStateIndex:670; appliedIndex:669; }","duration":"191.833488ms","start":"2024-07-19T04:53:31.248536Z","end":"2024-07-19T04:53:31.440369Z","steps":["trace[469390373] 'read index received'  (duration: 125.675151ms)","trace[469390373] 'applied index is now lower than readState.Index'  (duration: 66.157587ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T04:53:31.440517Z","caller":"traceutil/trace.go:171","msg":"trace[86144273] transaction","detail":"{read_only:false; response_revision:626; number_of_response:1; }","duration":"233.16083ms","start":"2024-07-19T04:53:31.207344Z","end":"2024-07-19T04:53:31.440505Z","steps":["trace[86144273] 'process raft request'  (duration: 166.925169ms)","trace[86144273] 'compare'  (duration: 65.691605ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T04:53:31.441061Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.50841ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" ","response":"range_response_count:1 size:421"}
	{"level":"info","ts":"2024-07-19T04:53:31.444317Z","caller":"traceutil/trace.go:171","msg":"trace[636042706] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:626; }","duration":"195.750529ms","start":"2024-07-19T04:53:31.248515Z","end":"2024-07-19T04:53:31.444266Z","steps":["trace[636042706] 'agreement among raft nodes before linearized reading'  (duration: 192.489226ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T04:53:31.780628Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"234.220306ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T04:53:31.781261Z","caller":"traceutil/trace.go:171","msg":"trace[1409875192] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:627; }","duration":"234.911295ms","start":"2024-07-19T04:53:31.546334Z","end":"2024-07-19T04:53:31.781245Z","steps":["trace[1409875192] 'range keys from in-memory index tree'  (duration: 234.176498ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T04:56:33.813585Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-19T04:56:33.813691Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-270078","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.17:2380"],"advertise-client-urls":["https://192.168.39.17:2379"]}
	{"level":"warn","ts":"2024-07-19T04:56:33.813806Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T04:56:33.813887Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T04:56:33.889594Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.17:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T04:56:33.889679Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.17:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-19T04:56:33.891335Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"2212c0bfe49c9415","current-leader-member-id":"2212c0bfe49c9415"}
	{"level":"info","ts":"2024-07-19T04:56:33.89385Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.17:2380"}
	{"level":"info","ts":"2024-07-19T04:56:33.894291Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.17:2380"}
	{"level":"info","ts":"2024-07-19T04:56:33.89436Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-270078","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.17:2380"],"advertise-client-urls":["https://192.168.39.17:2379"]}
	
	
	==> kernel <==
	 05:02:18 up 11 min,  0 users,  load average: 0.18, 0.16, 0.10
	Linux multinode-270078 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [055cf104d6bcd94aa209fcb410e05f96ce191340f62eecad3826a7ada7b521d1] <==
	I0719 04:55:47.577956       1 main.go:326] Node multinode-270078-m03 has CIDR [10.244.3.0/24] 
	I0719 04:55:57.576630       1 main.go:299] Handling node with IPs: map[192.168.39.17:{}]
	I0719 04:55:57.576689       1 main.go:303] handling current node
	I0719 04:55:57.576708       1 main.go:299] Handling node with IPs: map[192.168.39.199:{}]
	I0719 04:55:57.576715       1 main.go:326] Node multinode-270078-m02 has CIDR [10.244.1.0/24] 
	I0719 04:55:57.576913       1 main.go:299] Handling node with IPs: map[192.168.39.126:{}]
	I0719 04:55:57.576943       1 main.go:326] Node multinode-270078-m03 has CIDR [10.244.3.0/24] 
	I0719 04:56:07.585783       1 main.go:299] Handling node with IPs: map[192.168.39.17:{}]
	I0719 04:56:07.585911       1 main.go:303] handling current node
	I0719 04:56:07.585940       1 main.go:299] Handling node with IPs: map[192.168.39.199:{}]
	I0719 04:56:07.585958       1 main.go:326] Node multinode-270078-m02 has CIDR [10.244.1.0/24] 
	I0719 04:56:07.586082       1 main.go:299] Handling node with IPs: map[192.168.39.126:{}]
	I0719 04:56:07.586104       1 main.go:326] Node multinode-270078-m03 has CIDR [10.244.3.0/24] 
	I0719 04:56:17.585481       1 main.go:299] Handling node with IPs: map[192.168.39.17:{}]
	I0719 04:56:17.585524       1 main.go:303] handling current node
	I0719 04:56:17.585537       1 main.go:299] Handling node with IPs: map[192.168.39.199:{}]
	I0719 04:56:17.585556       1 main.go:326] Node multinode-270078-m02 has CIDR [10.244.1.0/24] 
	I0719 04:56:17.585705       1 main.go:299] Handling node with IPs: map[192.168.39.126:{}]
	I0719 04:56:17.585726       1 main.go:326] Node multinode-270078-m03 has CIDR [10.244.3.0/24] 
	I0719 04:56:27.585835       1 main.go:299] Handling node with IPs: map[192.168.39.17:{}]
	I0719 04:56:27.585878       1 main.go:303] handling current node
	I0719 04:56:27.585892       1 main.go:299] Handling node with IPs: map[192.168.39.199:{}]
	I0719 04:56:27.585897       1 main.go:326] Node multinode-270078-m02 has CIDR [10.244.1.0/24] 
	I0719 04:56:27.586008       1 main.go:299] Handling node with IPs: map[192.168.39.126:{}]
	I0719 04:56:27.586027       1 main.go:326] Node multinode-270078-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [9be01f79f8da3ef0bc186f7447b4204d4d63a6ccac0071192ac76a67625560d1] <==
	I0719 05:01:14.382854       1 main.go:326] Node multinode-270078-m02 has CIDR [10.244.1.0/24] 
	I0719 05:01:24.382572       1 main.go:299] Handling node with IPs: map[192.168.39.17:{}]
	I0719 05:01:24.382620       1 main.go:303] handling current node
	I0719 05:01:24.382639       1 main.go:299] Handling node with IPs: map[192.168.39.199:{}]
	I0719 05:01:24.382645       1 main.go:326] Node multinode-270078-m02 has CIDR [10.244.1.0/24] 
	I0719 05:01:34.388736       1 main.go:299] Handling node with IPs: map[192.168.39.17:{}]
	I0719 05:01:34.388833       1 main.go:303] handling current node
	I0719 05:01:34.388861       1 main.go:299] Handling node with IPs: map[192.168.39.199:{}]
	I0719 05:01:34.388867       1 main.go:326] Node multinode-270078-m02 has CIDR [10.244.1.0/24] 
	I0719 05:01:44.382175       1 main.go:299] Handling node with IPs: map[192.168.39.199:{}]
	I0719 05:01:44.382383       1 main.go:326] Node multinode-270078-m02 has CIDR [10.244.1.0/24] 
	I0719 05:01:44.382572       1 main.go:299] Handling node with IPs: map[192.168.39.17:{}]
	I0719 05:01:44.382614       1 main.go:303] handling current node
	I0719 05:01:54.390711       1 main.go:299] Handling node with IPs: map[192.168.39.199:{}]
	I0719 05:01:54.390807       1 main.go:326] Node multinode-270078-m02 has CIDR [10.244.1.0/24] 
	I0719 05:01:54.391002       1 main.go:299] Handling node with IPs: map[192.168.39.17:{}]
	I0719 05:01:54.391026       1 main.go:303] handling current node
	I0719 05:02:04.383886       1 main.go:299] Handling node with IPs: map[192.168.39.17:{}]
	I0719 05:02:04.383999       1 main.go:303] handling current node
	I0719 05:02:04.384030       1 main.go:299] Handling node with IPs: map[192.168.39.199:{}]
	I0719 05:02:04.384049       1 main.go:326] Node multinode-270078-m02 has CIDR [10.244.1.0/24] 
	I0719 05:02:14.382454       1 main.go:299] Handling node with IPs: map[192.168.39.17:{}]
	I0719 05:02:14.382658       1 main.go:303] handling current node
	I0719 05:02:14.382691       1 main.go:299] Handling node with IPs: map[192.168.39.199:{}]
	I0719 05:02:14.382740       1 main.go:326] Node multinode-270078-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [a38317f7436fad30424d945b8be343738b549bd1c74e0a51e050bdf48209a595] <==
	I0719 04:58:12.597474       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0719 04:58:12.604610       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0719 04:58:12.605479       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0719 04:58:12.606473       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0719 04:58:12.607203       1 aggregator.go:165] initial CRD sync complete...
	I0719 04:58:12.607213       1 autoregister_controller.go:141] Starting autoregister controller
	I0719 04:58:12.607218       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0719 04:58:12.607223       1 cache.go:39] Caches are synced for autoregister controller
	I0719 04:58:12.607440       1 shared_informer.go:320] Caches are synced for configmaps
	I0719 04:58:12.609943       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0719 04:58:12.610476       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0719 04:58:12.610499       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	E0719 04:58:12.613431       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0719 04:58:12.614072       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0719 04:58:12.620822       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0719 04:58:12.620854       1 policy_source.go:224] refreshing policies
	I0719 04:58:12.656479       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0719 04:58:13.529707       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0719 04:58:14.360204       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0719 04:58:14.466192       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0719 04:58:14.479389       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0719 04:58:14.539005       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0719 04:58:14.548177       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0719 04:58:25.066388       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 04:58:25.068073       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [de944624d060c786278833c561aff05831f19ee086f8a1db3bcd28573b7cfd58] <==
	W0719 04:56:33.833464       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.833494       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.833567       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.833597       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.833623       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.833669       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.833701       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.840494       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.844241       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.844301       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.844349       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.844381       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.845862       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.845923       1 logging.go:59] [core] [Channel #25 SubChannel #26] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.845971       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.846015       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.846058       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.846102       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.846135       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.846185       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.846235       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.846279       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.848118       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.848919       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:56:33.849016       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [4c87c6e53508415123b14762a3523c4e36d53f47c3607e4db1618bd3d9d3792e] <==
	I0719 04:58:53.860815       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-270078-m02\" does not exist"
	I0719 04:58:53.879304       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-270078-m02" podCIDRs=["10.244.1.0/24"]
	I0719 04:58:55.745253       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="119.968µs"
	I0719 04:58:55.787907       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="98.187µs"
	I0719 04:58:55.795545       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="79.137µs"
	I0719 04:58:55.814839       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="270.387µs"
	I0719 04:58:55.822279       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.927µs"
	I0719 04:58:55.825795       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.201µs"
	I0719 04:59:13.212583       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-270078-m02"
	I0719 04:59:13.231839       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.873µs"
	I0719 04:59:13.244901       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.375µs"
	I0719 04:59:16.250081       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.970812ms"
	I0719 04:59:16.250299       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="114.896µs"
	I0719 04:59:31.064113       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-270078-m02"
	I0719 04:59:32.489123       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-270078-m03\" does not exist"
	I0719 04:59:32.489226       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-270078-m02"
	I0719 04:59:32.499673       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-270078-m03" podCIDRs=["10.244.2.0/24"]
	I0719 04:59:51.977162       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-270078-m02"
	I0719 04:59:57.183462       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-270078-m02"
	I0719 05:00:40.157079       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.870397ms"
	I0719 05:00:40.158889       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.593µs"
	I0719 05:00:45.068258       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-88rhc"
	I0719 05:00:45.095937       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-88rhc"
	I0719 05:00:45.095973       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-t666c"
	I0719 05:00:45.117787       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-t666c"
	
	
	==> kube-controller-manager [c4ed35a688d466e50ef053719ac811f72487848d9a77bb399a22fe1e445c6a68] <==
	I0719 04:52:27.046037       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-270078-m02\" does not exist"
	I0719 04:52:27.060958       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-270078-m02" podCIDRs=["10.244.1.0/24"]
	I0719 04:52:29.737457       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-270078-m02"
	I0719 04:52:47.631114       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-270078-m02"
	I0719 04:52:50.163335       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.759648ms"
	I0719 04:52:50.174045       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.566045ms"
	I0719 04:52:50.176321       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.322µs"
	I0719 04:52:50.176906       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.392µs"
	I0719 04:52:53.389146       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.634653ms"
	I0719 04:52:53.389229       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.183µs"
	I0719 04:52:53.935390       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.242966ms"
	I0719 04:52:53.935600       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.99µs"
	I0719 04:53:20.902858       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-270078-m02"
	I0719 04:53:20.904326       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-270078-m03\" does not exist"
	I0719 04:53:20.970525       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-270078-m03" podCIDRs=["10.244.2.0/24"]
	I0719 04:53:24.756470       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-270078-m03"
	I0719 04:53:39.862160       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-270078-m02"
	I0719 04:54:08.064042       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-270078-m02"
	I0719 04:54:09.136417       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-270078-m03\" does not exist"
	I0719 04:54:09.138857       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-270078-m02"
	I0719 04:54:09.154083       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-270078-m03" podCIDRs=["10.244.3.0/24"]
	I0719 04:54:28.376714       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-270078-m02"
	I0719 04:55:09.814512       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-270078-m02"
	I0719 04:55:09.881601       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.133543ms"
	I0719 04:55:09.881830       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.714µs"
	
	
	==> kube-proxy [33b69ea0ad2f4ced8ca5a9cbb00cd82cee4d47163212947312b7db626ee10f91] <==
	I0719 04:51:47.300732       1 server_linux.go:69] "Using iptables proxy"
	I0719 04:51:47.311982       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.17"]
	I0719 04:51:47.342717       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 04:51:47.342806       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 04:51:47.342821       1 server_linux.go:165] "Using iptables Proxier"
	I0719 04:51:47.344929       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 04:51:47.345116       1 server.go:872] "Version info" version="v1.30.3"
	I0719 04:51:47.345136       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 04:51:47.346275       1 config.go:192] "Starting service config controller"
	I0719 04:51:47.346335       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 04:51:47.346357       1 config.go:101] "Starting endpoint slice config controller"
	I0719 04:51:47.346361       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 04:51:47.346828       1 config.go:319] "Starting node config controller"
	I0719 04:51:47.346848       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 04:51:47.446515       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 04:51:47.446562       1 shared_informer.go:320] Caches are synced for service config
	I0719 04:51:47.447401       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [c9fb3de46f094b0d0a667b70372c5d21aa341c4924b29818f0d8c37a44214901] <==
	I0719 04:58:13.652793       1 server_linux.go:69] "Using iptables proxy"
	I0719 04:58:13.708803       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.17"]
	I0719 04:58:13.754382       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 04:58:13.754423       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 04:58:13.754439       1 server_linux.go:165] "Using iptables Proxier"
	I0719 04:58:13.756935       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 04:58:13.757698       1 server.go:872] "Version info" version="v1.30.3"
	I0719 04:58:13.757834       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 04:58:13.761172       1 config.go:192] "Starting service config controller"
	I0719 04:58:13.761230       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 04:58:13.761273       1 config.go:101] "Starting endpoint slice config controller"
	I0719 04:58:13.761290       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 04:58:13.761919       1 config.go:319] "Starting node config controller"
	I0719 04:58:13.761977       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 04:58:13.862318       1 shared_informer.go:320] Caches are synced for node config
	I0719 04:58:13.862868       1 shared_informer.go:320] Caches are synced for service config
	I0719 04:58:13.862937       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [938b8fa47de5bc6b50fc4dc1842ace7580870e225f15d76f6b4e6dce2fc79401] <==
	E0719 04:51:29.399576       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 04:51:29.399673       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 04:51:29.399699       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0719 04:51:29.399799       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 04:51:29.399827       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0719 04:51:29.399943       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 04:51:29.399965       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0719 04:51:29.400819       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 04:51:29.401800       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0719 04:51:30.304640       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 04:51:30.304694       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0719 04:51:30.308744       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 04:51:30.308828       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 04:51:30.441651       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0719 04:51:30.441691       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0719 04:51:30.461803       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 04:51:30.461871       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0719 04:51:30.563400       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 04:51:30.563475       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0719 04:51:30.620123       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 04:51:30.620276       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0719 04:51:30.859107       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 04:51:30.859361       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0719 04:51:33.088331       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0719 04:56:33.823981       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c4243cd4492945493dffe7402237b7f0a5227fd3901c70b61f08f4914c3fb9e0] <==
	I0719 04:58:10.793343       1 serving.go:380] Generated self-signed cert in-memory
	W0719 04:58:12.560976       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0719 04:58:12.561125       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 04:58:12.561155       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0719 04:58:12.561220       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0719 04:58:12.580911       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0719 04:58:12.581016       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 04:58:12.582686       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0719 04:58:12.582727       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 04:58:12.583087       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0719 04:58:12.583133       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0719 04:58:12.684344       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 19 04:58:12 multinode-270078 kubelet[3037]: I0719 04:58:12.854729    3037 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a11e3057-6b32-41a1-ac4e-7d8d225d7daa-cni-cfg\") pod \"kindnet-fzrm8\" (UID: \"a11e3057-6b32-41a1-ac4e-7d8d225d7daa\") " pod="kube-system/kindnet-fzrm8"
	Jul 19 04:58:12 multinode-270078 kubelet[3037]: I0719 04:58:12.854776    3037 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a11e3057-6b32-41a1-ac4e-7d8d225d7daa-xtables-lock\") pod \"kindnet-fzrm8\" (UID: \"a11e3057-6b32-41a1-ac4e-7d8d225d7daa\") " pod="kube-system/kindnet-fzrm8"
	Jul 19 04:58:12 multinode-270078 kubelet[3037]: I0719 04:58:12.854794    3037 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d6b93c21-dfc9-4700-b89e-075132f74950-tmp\") pod \"storage-provisioner\" (UID: \"d6b93c21-dfc9-4700-b89e-075132f74950\") " pod="kube-system/storage-provisioner"
	Jul 19 04:58:12 multinode-270078 kubelet[3037]: I0719 04:58:12.854807    3037 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1361f5f1-8094-4ca4-b6b9-3a104ad7d9a4-lib-modules\") pod \"kube-proxy-7qj9p\" (UID: \"1361f5f1-8094-4ca4-b6b9-3a104ad7d9a4\") " pod="kube-system/kube-proxy-7qj9p"
	Jul 19 04:58:18 multinode-270078 kubelet[3037]: I0719 04:58:18.191114    3037 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 19 04:59:08 multinode-270078 kubelet[3037]: E0719 04:59:08.918537    3037 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 04:59:08 multinode-270078 kubelet[3037]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 04:59:08 multinode-270078 kubelet[3037]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 04:59:08 multinode-270078 kubelet[3037]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 04:59:08 multinode-270078 kubelet[3037]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 05:00:08 multinode-270078 kubelet[3037]: E0719 05:00:08.920203    3037 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 05:00:08 multinode-270078 kubelet[3037]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 05:00:08 multinode-270078 kubelet[3037]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 05:00:08 multinode-270078 kubelet[3037]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 05:00:08 multinode-270078 kubelet[3037]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 05:01:08 multinode-270078 kubelet[3037]: E0719 05:01:08.920865    3037 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 05:01:08 multinode-270078 kubelet[3037]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 05:01:08 multinode-270078 kubelet[3037]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 05:01:08 multinode-270078 kubelet[3037]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 05:01:08 multinode-270078 kubelet[3037]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 05:02:08 multinode-270078 kubelet[3037]: E0719 05:02:08.919653    3037 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 05:02:08 multinode-270078 kubelet[3037]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 05:02:08 multinode-270078 kubelet[3037]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 05:02:08 multinode-270078 kubelet[3037]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 05:02:08 multinode-270078 kubelet[3037]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 05:02:17.911412  165338 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19302-122995/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-270078 -n multinode-270078
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-270078 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.33s)

                                                
                                    
x
+
TestPreload (186.3s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-332657 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0719 05:06:36.835630  130170 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/functional-554179/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-332657 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m49.422327349s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-332657 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-332657 image pull gcr.io/k8s-minikube/busybox: (2.753713865s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-332657
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-332657: (6.489610797s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-332657 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-332657 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m4.565883485s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-332657 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:626: *** TestPreload FAILED at 2024-07-19 05:09:14.653967014 +0000 UTC m=+5519.808752591
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-332657 -n test-preload-332657
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-332657 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-332657 logs -n 25: (1.099060836s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-270078 ssh -n                                                                 | multinode-270078     | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | multinode-270078-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-270078 ssh -n multinode-270078 sudo cat                                       | multinode-270078     | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | /home/docker/cp-test_multinode-270078-m03_multinode-270078.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-270078 cp multinode-270078-m03:/home/docker/cp-test.txt                       | multinode-270078     | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | multinode-270078-m02:/home/docker/cp-test_multinode-270078-m03_multinode-270078-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-270078 ssh -n                                                                 | multinode-270078     | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | multinode-270078-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-270078 ssh -n multinode-270078-m02 sudo cat                                   | multinode-270078     | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | /home/docker/cp-test_multinode-270078-m03_multinode-270078-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-270078 node stop m03                                                          | multinode-270078     | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	| node    | multinode-270078 node start                                                             | multinode-270078     | jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:54 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-270078                                                                | multinode-270078     | jenkins | v1.33.1 | 19 Jul 24 04:54 UTC |                     |
	| stop    | -p multinode-270078                                                                     | multinode-270078     | jenkins | v1.33.1 | 19 Jul 24 04:54 UTC |                     |
	| start   | -p multinode-270078                                                                     | multinode-270078     | jenkins | v1.33.1 | 19 Jul 24 04:56 UTC | 19 Jul 24 04:59 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-270078                                                                | multinode-270078     | jenkins | v1.33.1 | 19 Jul 24 04:59 UTC |                     |
	| node    | multinode-270078 node delete                                                            | multinode-270078     | jenkins | v1.33.1 | 19 Jul 24 04:59 UTC | 19 Jul 24 04:59 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-270078 stop                                                                   | multinode-270078     | jenkins | v1.33.1 | 19 Jul 24 04:59 UTC |                     |
	| start   | -p multinode-270078                                                                     | multinode-270078     | jenkins | v1.33.1 | 19 Jul 24 05:02 UTC | 19 Jul 24 05:05 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-270078                                                                | multinode-270078     | jenkins | v1.33.1 | 19 Jul 24 05:05 UTC |                     |
	| start   | -p multinode-270078-m02                                                                 | multinode-270078-m02 | jenkins | v1.33.1 | 19 Jul 24 05:05 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-270078-m03                                                                 | multinode-270078-m03 | jenkins | v1.33.1 | 19 Jul 24 05:05 UTC | 19 Jul 24 05:06 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-270078                                                                 | multinode-270078     | jenkins | v1.33.1 | 19 Jul 24 05:06 UTC |                     |
	| delete  | -p multinode-270078-m03                                                                 | multinode-270078-m03 | jenkins | v1.33.1 | 19 Jul 24 05:06 UTC | 19 Jul 24 05:06 UTC |
	| delete  | -p multinode-270078                                                                     | multinode-270078     | jenkins | v1.33.1 | 19 Jul 24 05:06 UTC | 19 Jul 24 05:06 UTC |
	| start   | -p test-preload-332657                                                                  | test-preload-332657  | jenkins | v1.33.1 | 19 Jul 24 05:06 UTC | 19 Jul 24 05:08 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-332657 image pull                                                          | test-preload-332657  | jenkins | v1.33.1 | 19 Jul 24 05:08 UTC | 19 Jul 24 05:08 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-332657                                                                  | test-preload-332657  | jenkins | v1.33.1 | 19 Jul 24 05:08 UTC | 19 Jul 24 05:08 UTC |
	| start   | -p test-preload-332657                                                                  | test-preload-332657  | jenkins | v1.33.1 | 19 Jul 24 05:08 UTC | 19 Jul 24 05:09 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-332657 image list                                                          | test-preload-332657  | jenkins | v1.33.1 | 19 Jul 24 05:09 UTC | 19 Jul 24 05:09 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 05:08:09
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 05:08:09.902658  167777 out.go:291] Setting OutFile to fd 1 ...
	I0719 05:08:09.902793  167777 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 05:08:09.902803  167777 out.go:304] Setting ErrFile to fd 2...
	I0719 05:08:09.902807  167777 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 05:08:09.903024  167777 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-122995/.minikube/bin
	I0719 05:08:09.903568  167777 out.go:298] Setting JSON to false
	I0719 05:08:09.904458  167777 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":10233,"bootTime":1721355457,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 05:08:09.904517  167777 start.go:139] virtualization: kvm guest
	I0719 05:08:09.906620  167777 out.go:177] * [test-preload-332657] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 05:08:09.908043  167777 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 05:08:09.908093  167777 notify.go:220] Checking for updates...
	I0719 05:08:09.910398  167777 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 05:08:09.911628  167777 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-122995/kubeconfig
	I0719 05:08:09.912925  167777 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-122995/.minikube
	I0719 05:08:09.914134  167777 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 05:08:09.915318  167777 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 05:08:09.916825  167777 config.go:182] Loaded profile config "test-preload-332657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0719 05:08:09.917225  167777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 05:08:09.917264  167777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 05:08:09.931838  167777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41291
	I0719 05:08:09.932260  167777 main.go:141] libmachine: () Calling .GetVersion
	I0719 05:08:09.932811  167777 main.go:141] libmachine: Using API Version  1
	I0719 05:08:09.932840  167777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 05:08:09.933255  167777 main.go:141] libmachine: () Calling .GetMachineName
	I0719 05:08:09.933454  167777 main.go:141] libmachine: (test-preload-332657) Calling .DriverName
	I0719 05:08:09.935217  167777 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0719 05:08:09.936319  167777 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 05:08:09.936609  167777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 05:08:09.936643  167777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 05:08:09.951078  167777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40069
	I0719 05:08:09.951572  167777 main.go:141] libmachine: () Calling .GetVersion
	I0719 05:08:09.952048  167777 main.go:141] libmachine: Using API Version  1
	I0719 05:08:09.952079  167777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 05:08:09.952372  167777 main.go:141] libmachine: () Calling .GetMachineName
	I0719 05:08:09.952547  167777 main.go:141] libmachine: (test-preload-332657) Calling .DriverName
	I0719 05:08:09.988296  167777 out.go:177] * Using the kvm2 driver based on existing profile
	I0719 05:08:09.989537  167777 start.go:297] selected driver: kvm2
	I0719 05:08:09.989559  167777 start.go:901] validating driver "kvm2" against &{Name:test-preload-332657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-332657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 05:08:09.989666  167777 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 05:08:09.990363  167777 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 05:08:09.990438  167777 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-122995/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 05:08:10.005446  167777 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 05:08:10.005754  167777 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 05:08:10.005813  167777 cni.go:84] Creating CNI manager for ""
	I0719 05:08:10.005825  167777 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 05:08:10.005880  167777 start.go:340] cluster config:
	{Name:test-preload-332657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-332657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 05:08:10.005971  167777 iso.go:125] acquiring lock: {Name:mk610026cb7ac7ecfa6440021a031d3b49160f81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 05:08:10.007703  167777 out.go:177] * Starting "test-preload-332657" primary control-plane node in "test-preload-332657" cluster
	I0719 05:08:10.008845  167777 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0719 05:08:10.391137  167777 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0719 05:08:10.391180  167777 cache.go:56] Caching tarball of preloaded images
	I0719 05:08:10.391383  167777 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0719 05:08:10.393163  167777 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0719 05:08:10.394258  167777 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0719 05:08:10.505345  167777 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19302-122995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0719 05:08:21.738841  167777 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0719 05:08:21.738945  167777 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19302-122995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0719 05:08:22.709569  167777 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0719 05:08:22.709693  167777 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/test-preload-332657/config.json ...
	I0719 05:08:22.709915  167777 start.go:360] acquireMachinesLock for test-preload-332657: {Name:mkfbbe6ca8c44534b944b48224a0199ec825bc72 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 05:08:22.709973  167777 start.go:364] duration metric: took 38.768µs to acquireMachinesLock for "test-preload-332657"
	I0719 05:08:22.709988  167777 start.go:96] Skipping create...Using existing machine configuration
	I0719 05:08:22.709993  167777 fix.go:54] fixHost starting: 
	I0719 05:08:22.710300  167777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 05:08:22.710329  167777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 05:08:22.724913  167777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36627
	I0719 05:08:22.725384  167777 main.go:141] libmachine: () Calling .GetVersion
	I0719 05:08:22.725830  167777 main.go:141] libmachine: Using API Version  1
	I0719 05:08:22.725852  167777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 05:08:22.726145  167777 main.go:141] libmachine: () Calling .GetMachineName
	I0719 05:08:22.726315  167777 main.go:141] libmachine: (test-preload-332657) Calling .DriverName
	I0719 05:08:22.726480  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetState
	I0719 05:08:22.727988  167777 fix.go:112] recreateIfNeeded on test-preload-332657: state=Stopped err=<nil>
	I0719 05:08:22.728011  167777 main.go:141] libmachine: (test-preload-332657) Calling .DriverName
	W0719 05:08:22.728167  167777 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 05:08:22.730264  167777 out.go:177] * Restarting existing kvm2 VM for "test-preload-332657" ...
	I0719 05:08:22.731507  167777 main.go:141] libmachine: (test-preload-332657) Calling .Start
	I0719 05:08:22.731663  167777 main.go:141] libmachine: (test-preload-332657) Ensuring networks are active...
	I0719 05:08:22.732361  167777 main.go:141] libmachine: (test-preload-332657) Ensuring network default is active
	I0719 05:08:22.732708  167777 main.go:141] libmachine: (test-preload-332657) Ensuring network mk-test-preload-332657 is active
	I0719 05:08:22.733098  167777 main.go:141] libmachine: (test-preload-332657) Getting domain xml...
	I0719 05:08:22.733898  167777 main.go:141] libmachine: (test-preload-332657) Creating domain...
	I0719 05:08:23.909592  167777 main.go:141] libmachine: (test-preload-332657) Waiting to get IP...
	I0719 05:08:23.910443  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:23.910805  167777 main.go:141] libmachine: (test-preload-332657) DBG | unable to find current IP address of domain test-preload-332657 in network mk-test-preload-332657
	I0719 05:08:23.910864  167777 main.go:141] libmachine: (test-preload-332657) DBG | I0719 05:08:23.910786  167862 retry.go:31] will retry after 190.570748ms: waiting for machine to come up
	I0719 05:08:24.103215  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:24.103685  167777 main.go:141] libmachine: (test-preload-332657) DBG | unable to find current IP address of domain test-preload-332657 in network mk-test-preload-332657
	I0719 05:08:24.103713  167777 main.go:141] libmachine: (test-preload-332657) DBG | I0719 05:08:24.103628  167862 retry.go:31] will retry after 245.820704ms: waiting for machine to come up
	I0719 05:08:24.351184  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:24.351626  167777 main.go:141] libmachine: (test-preload-332657) DBG | unable to find current IP address of domain test-preload-332657 in network mk-test-preload-332657
	I0719 05:08:24.351649  167777 main.go:141] libmachine: (test-preload-332657) DBG | I0719 05:08:24.351568  167862 retry.go:31] will retry after 448.468938ms: waiting for machine to come up
	I0719 05:08:24.801100  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:24.801573  167777 main.go:141] libmachine: (test-preload-332657) DBG | unable to find current IP address of domain test-preload-332657 in network mk-test-preload-332657
	I0719 05:08:24.801604  167777 main.go:141] libmachine: (test-preload-332657) DBG | I0719 05:08:24.801521  167862 retry.go:31] will retry after 583.841866ms: waiting for machine to come up
	I0719 05:08:25.387402  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:25.387736  167777 main.go:141] libmachine: (test-preload-332657) DBG | unable to find current IP address of domain test-preload-332657 in network mk-test-preload-332657
	I0719 05:08:25.387758  167777 main.go:141] libmachine: (test-preload-332657) DBG | I0719 05:08:25.387708  167862 retry.go:31] will retry after 581.162023ms: waiting for machine to come up
	I0719 05:08:25.970028  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:25.970396  167777 main.go:141] libmachine: (test-preload-332657) DBG | unable to find current IP address of domain test-preload-332657 in network mk-test-preload-332657
	I0719 05:08:25.970425  167777 main.go:141] libmachine: (test-preload-332657) DBG | I0719 05:08:25.970345  167862 retry.go:31] will retry after 773.781929ms: waiting for machine to come up
	I0719 05:08:26.745245  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:26.745623  167777 main.go:141] libmachine: (test-preload-332657) DBG | unable to find current IP address of domain test-preload-332657 in network mk-test-preload-332657
	I0719 05:08:26.745653  167777 main.go:141] libmachine: (test-preload-332657) DBG | I0719 05:08:26.745562  167862 retry.go:31] will retry after 1.134887483s: waiting for machine to come up
	I0719 05:08:27.882284  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:27.882752  167777 main.go:141] libmachine: (test-preload-332657) DBG | unable to find current IP address of domain test-preload-332657 in network mk-test-preload-332657
	I0719 05:08:27.882784  167777 main.go:141] libmachine: (test-preload-332657) DBG | I0719 05:08:27.882702  167862 retry.go:31] will retry after 1.032357307s: waiting for machine to come up
	I0719 05:08:28.916241  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:28.916676  167777 main.go:141] libmachine: (test-preload-332657) DBG | unable to find current IP address of domain test-preload-332657 in network mk-test-preload-332657
	I0719 05:08:28.916701  167777 main.go:141] libmachine: (test-preload-332657) DBG | I0719 05:08:28.916616  167862 retry.go:31] will retry after 1.135174678s: waiting for machine to come up
	I0719 05:08:30.052953  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:30.053473  167777 main.go:141] libmachine: (test-preload-332657) DBG | unable to find current IP address of domain test-preload-332657 in network mk-test-preload-332657
	I0719 05:08:30.053501  167777 main.go:141] libmachine: (test-preload-332657) DBG | I0719 05:08:30.053425  167862 retry.go:31] will retry after 1.958875271s: waiting for machine to come up
	I0719 05:08:32.014521  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:32.014888  167777 main.go:141] libmachine: (test-preload-332657) DBG | unable to find current IP address of domain test-preload-332657 in network mk-test-preload-332657
	I0719 05:08:32.014912  167777 main.go:141] libmachine: (test-preload-332657) DBG | I0719 05:08:32.014832  167862 retry.go:31] will retry after 2.488249374s: waiting for machine to come up
	I0719 05:08:34.505859  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:34.506245  167777 main.go:141] libmachine: (test-preload-332657) DBG | unable to find current IP address of domain test-preload-332657 in network mk-test-preload-332657
	I0719 05:08:34.506273  167777 main.go:141] libmachine: (test-preload-332657) DBG | I0719 05:08:34.506193  167862 retry.go:31] will retry after 2.641033774s: waiting for machine to come up
	I0719 05:08:37.149971  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:37.150509  167777 main.go:141] libmachine: (test-preload-332657) DBG | unable to find current IP address of domain test-preload-332657 in network mk-test-preload-332657
	I0719 05:08:37.150538  167777 main.go:141] libmachine: (test-preload-332657) DBG | I0719 05:08:37.150457  167862 retry.go:31] will retry after 3.322775916s: waiting for machine to come up
	I0719 05:08:40.475147  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:40.475634  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has current primary IP address 192.168.39.207 and MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:40.475656  167777 main.go:141] libmachine: (test-preload-332657) Found IP for machine: 192.168.39.207
	I0719 05:08:40.475667  167777 main.go:141] libmachine: (test-preload-332657) Reserving static IP address...
	I0719 05:08:40.476129  167777 main.go:141] libmachine: (test-preload-332657) Reserved static IP address: 192.168.39.207
	I0719 05:08:40.476175  167777 main.go:141] libmachine: (test-preload-332657) DBG | found host DHCP lease matching {name: "test-preload-332657", mac: "52:54:00:06:22:1f", ip: "192.168.39.207"} in network mk-test-preload-332657: {Iface:virbr1 ExpiryTime:2024-07-19 06:08:32 +0000 UTC Type:0 Mac:52:54:00:06:22:1f Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:test-preload-332657 Clientid:01:52:54:00:06:22:1f}
	I0719 05:08:40.476191  167777 main.go:141] libmachine: (test-preload-332657) Waiting for SSH to be available...
	I0719 05:08:40.476221  167777 main.go:141] libmachine: (test-preload-332657) DBG | skip adding static IP to network mk-test-preload-332657 - found existing host DHCP lease matching {name: "test-preload-332657", mac: "52:54:00:06:22:1f", ip: "192.168.39.207"}
	I0719 05:08:40.476235  167777 main.go:141] libmachine: (test-preload-332657) DBG | Getting to WaitForSSH function...
	I0719 05:08:40.478296  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:40.478593  167777 main.go:141] libmachine: (test-preload-332657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:22:1f", ip: ""} in network mk-test-preload-332657: {Iface:virbr1 ExpiryTime:2024-07-19 06:08:32 +0000 UTC Type:0 Mac:52:54:00:06:22:1f Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:test-preload-332657 Clientid:01:52:54:00:06:22:1f}
	I0719 05:08:40.478629  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined IP address 192.168.39.207 and MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:40.478723  167777 main.go:141] libmachine: (test-preload-332657) DBG | Using SSH client type: external
	I0719 05:08:40.478751  167777 main.go:141] libmachine: (test-preload-332657) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/test-preload-332657/id_rsa (-rw-------)
	I0719 05:08:40.478775  167777 main.go:141] libmachine: (test-preload-332657) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.207 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-122995/.minikube/machines/test-preload-332657/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 05:08:40.478788  167777 main.go:141] libmachine: (test-preload-332657) DBG | About to run SSH command:
	I0719 05:08:40.478796  167777 main.go:141] libmachine: (test-preload-332657) DBG | exit 0
	I0719 05:08:40.600801  167777 main.go:141] libmachine: (test-preload-332657) DBG | SSH cmd err, output: <nil>: 
	I0719 05:08:40.601154  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetConfigRaw
	I0719 05:08:40.601747  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetIP
	I0719 05:08:40.604168  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:40.604517  167777 main.go:141] libmachine: (test-preload-332657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:22:1f", ip: ""} in network mk-test-preload-332657: {Iface:virbr1 ExpiryTime:2024-07-19 06:08:32 +0000 UTC Type:0 Mac:52:54:00:06:22:1f Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:test-preload-332657 Clientid:01:52:54:00:06:22:1f}
	I0719 05:08:40.604541  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined IP address 192.168.39.207 and MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:40.604737  167777 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/test-preload-332657/config.json ...
	I0719 05:08:40.604921  167777 machine.go:94] provisionDockerMachine start ...
	I0719 05:08:40.604939  167777 main.go:141] libmachine: (test-preload-332657) Calling .DriverName
	I0719 05:08:40.605158  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHHostname
	I0719 05:08:40.607483  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:40.607806  167777 main.go:141] libmachine: (test-preload-332657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:22:1f", ip: ""} in network mk-test-preload-332657: {Iface:virbr1 ExpiryTime:2024-07-19 06:08:32 +0000 UTC Type:0 Mac:52:54:00:06:22:1f Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:test-preload-332657 Clientid:01:52:54:00:06:22:1f}
	I0719 05:08:40.607827  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined IP address 192.168.39.207 and MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:40.607986  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHPort
	I0719 05:08:40.608176  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHKeyPath
	I0719 05:08:40.608346  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHKeyPath
	I0719 05:08:40.608476  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHUsername
	I0719 05:08:40.608653  167777 main.go:141] libmachine: Using SSH client type: native
	I0719 05:08:40.608853  167777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0719 05:08:40.608864  167777 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 05:08:40.709006  167777 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 05:08:40.709032  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetMachineName
	I0719 05:08:40.709300  167777 buildroot.go:166] provisioning hostname "test-preload-332657"
	I0719 05:08:40.709331  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetMachineName
	I0719 05:08:40.709548  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHHostname
	I0719 05:08:40.711933  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:40.712331  167777 main.go:141] libmachine: (test-preload-332657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:22:1f", ip: ""} in network mk-test-preload-332657: {Iface:virbr1 ExpiryTime:2024-07-19 06:08:32 +0000 UTC Type:0 Mac:52:54:00:06:22:1f Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:test-preload-332657 Clientid:01:52:54:00:06:22:1f}
	I0719 05:08:40.712357  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined IP address 192.168.39.207 and MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:40.712534  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHPort
	I0719 05:08:40.712707  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHKeyPath
	I0719 05:08:40.712917  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHKeyPath
	I0719 05:08:40.713107  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHUsername
	I0719 05:08:40.713297  167777 main.go:141] libmachine: Using SSH client type: native
	I0719 05:08:40.713513  167777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0719 05:08:40.713530  167777 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-332657 && echo "test-preload-332657" | sudo tee /etc/hostname
	I0719 05:08:40.826413  167777 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-332657
	
	I0719 05:08:40.826441  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHHostname
	I0719 05:08:40.829489  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:40.829816  167777 main.go:141] libmachine: (test-preload-332657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:22:1f", ip: ""} in network mk-test-preload-332657: {Iface:virbr1 ExpiryTime:2024-07-19 06:08:32 +0000 UTC Type:0 Mac:52:54:00:06:22:1f Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:test-preload-332657 Clientid:01:52:54:00:06:22:1f}
	I0719 05:08:40.829838  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined IP address 192.168.39.207 and MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:40.830054  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHPort
	I0719 05:08:40.830234  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHKeyPath
	I0719 05:08:40.830417  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHKeyPath
	I0719 05:08:40.830570  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHUsername
	I0719 05:08:40.830765  167777 main.go:141] libmachine: Using SSH client type: native
	I0719 05:08:40.830976  167777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0719 05:08:40.831001  167777 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-332657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-332657/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-332657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 05:08:40.941519  167777 main.go:141] libmachine: SSH cmd err, output: <nil>: 
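
The hostname provisioning above is driven by running shell commands over SSH against the VM at 192.168.39.207:22 with the machine's id_rsa key. A minimal sketch of that kind of remote exec using golang.org/x/crypto/ssh follows; it is an illustration only, not minikube's libmachine/ssh_runner implementation, and the user and key path are simply taken from the log.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote runs a single command on the VM over SSH, roughly what each
// "About to run SSH command" / "SSH cmd err, output" pair above corresponds to.
func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM, host key not pinned
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runRemote("192.168.39.207:22", "docker",
		"/home/jenkins/minikube-integration/19302-122995/.minikube/machines/test-preload-332657/id_rsa",
		`sudo hostname test-preload-332657 && echo "test-preload-332657" | sudo tee /etc/hostname`)
	fmt.Println(out, err)
}
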
	I0719 05:08:40.941556  167777 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-122995/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-122995/.minikube}
	I0719 05:08:40.941614  167777 buildroot.go:174] setting up certificates
	I0719 05:08:40.941628  167777 provision.go:84] configureAuth start
	I0719 05:08:40.941644  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetMachineName
	I0719 05:08:40.941968  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetIP
	I0719 05:08:40.944602  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:40.944940  167777 main.go:141] libmachine: (test-preload-332657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:22:1f", ip: ""} in network mk-test-preload-332657: {Iface:virbr1 ExpiryTime:2024-07-19 06:08:32 +0000 UTC Type:0 Mac:52:54:00:06:22:1f Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:test-preload-332657 Clientid:01:52:54:00:06:22:1f}
	I0719 05:08:40.944978  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined IP address 192.168.39.207 and MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:40.945185  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHHostname
	I0719 05:08:40.947308  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:40.947611  167777 main.go:141] libmachine: (test-preload-332657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:22:1f", ip: ""} in network mk-test-preload-332657: {Iface:virbr1 ExpiryTime:2024-07-19 06:08:32 +0000 UTC Type:0 Mac:52:54:00:06:22:1f Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:test-preload-332657 Clientid:01:52:54:00:06:22:1f}
	I0719 05:08:40.947637  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined IP address 192.168.39.207 and MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:40.947774  167777 provision.go:143] copyHostCerts
	I0719 05:08:40.947838  167777 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem, removing ...
	I0719 05:08:40.947849  167777 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem
	I0719 05:08:40.947913  167777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem (1679 bytes)
	I0719 05:08:40.948010  167777 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem, removing ...
	I0719 05:08:40.948019  167777 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem
	I0719 05:08:40.948046  167777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem (1082 bytes)
	I0719 05:08:40.948098  167777 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem, removing ...
	I0719 05:08:40.948104  167777 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem
	I0719 05:08:40.948124  167777 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem (1123 bytes)
	I0719 05:08:40.948170  167777 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem org=jenkins.test-preload-332657 san=[127.0.0.1 192.168.39.207 localhost minikube test-preload-332657]
	I0719 05:08:41.152128  167777 provision.go:177] copyRemoteCerts
	I0719 05:08:41.152187  167777 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 05:08:41.152223  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHHostname
	I0719 05:08:41.155069  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:41.155406  167777 main.go:141] libmachine: (test-preload-332657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:22:1f", ip: ""} in network mk-test-preload-332657: {Iface:virbr1 ExpiryTime:2024-07-19 06:08:32 +0000 UTC Type:0 Mac:52:54:00:06:22:1f Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:test-preload-332657 Clientid:01:52:54:00:06:22:1f}
	I0719 05:08:41.155439  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined IP address 192.168.39.207 and MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:41.155607  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHPort
	I0719 05:08:41.155793  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHKeyPath
	I0719 05:08:41.156003  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHUsername
	I0719 05:08:41.156131  167777 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/test-preload-332657/id_rsa Username:docker}
	I0719 05:08:41.234842  167777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0719 05:08:41.257024  167777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 05:08:41.279168  167777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 05:08:41.300852  167777 provision.go:87] duration metric: took 359.206559ms to configureAuth
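
configureAuth regenerates a server certificate whose SANs cover 127.0.0.1, 192.168.39.207, localhost, minikube and test-preload-332657, signed by the CA under .minikube/certs. A minimal sketch of issuing such a certificate with crypto/x509; the helper name and parameter choices are assumptions for illustration, not minikube's provisioning code.

package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a server certificate for the SANs listed in the
// provision.go line above, using the given CA certificate and key.
func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-332657"}},
		DNSNames:     []string{"localhost", "minikube", "test-preload-332657"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.207")},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	// DER bytes; PEM-encode and copy to /etc/docker/server.pem as the scp lines below do.
	return der, key, nil
}
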
	I0719 05:08:41.300882  167777 buildroot.go:189] setting minikube options for container-runtime
	I0719 05:08:41.301093  167777 config.go:182] Loaded profile config "test-preload-332657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0719 05:08:41.301173  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHHostname
	I0719 05:08:41.304181  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:41.304548  167777 main.go:141] libmachine: (test-preload-332657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:22:1f", ip: ""} in network mk-test-preload-332657: {Iface:virbr1 ExpiryTime:2024-07-19 06:08:32 +0000 UTC Type:0 Mac:52:54:00:06:22:1f Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:test-preload-332657 Clientid:01:52:54:00:06:22:1f}
	I0719 05:08:41.304572  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined IP address 192.168.39.207 and MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:41.304753  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHPort
	I0719 05:08:41.304958  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHKeyPath
	I0719 05:08:41.305203  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHKeyPath
	I0719 05:08:41.305377  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHUsername
	I0719 05:08:41.305521  167777 main.go:141] libmachine: Using SSH client type: native
	I0719 05:08:41.305680  167777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0719 05:08:41.305695  167777 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 05:08:41.558785  167777 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 05:08:41.558820  167777 machine.go:97] duration metric: took 953.885496ms to provisionDockerMachine
	I0719 05:08:41.558834  167777 start.go:293] postStartSetup for "test-preload-332657" (driver="kvm2")
	I0719 05:08:41.558847  167777 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 05:08:41.558866  167777 main.go:141] libmachine: (test-preload-332657) Calling .DriverName
	I0719 05:08:41.559209  167777 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 05:08:41.559263  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHHostname
	I0719 05:08:41.561787  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:41.562080  167777 main.go:141] libmachine: (test-preload-332657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:22:1f", ip: ""} in network mk-test-preload-332657: {Iface:virbr1 ExpiryTime:2024-07-19 06:08:32 +0000 UTC Type:0 Mac:52:54:00:06:22:1f Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:test-preload-332657 Clientid:01:52:54:00:06:22:1f}
	I0719 05:08:41.562099  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined IP address 192.168.39.207 and MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:41.562303  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHPort
	I0719 05:08:41.562495  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHKeyPath
	I0719 05:08:41.562671  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHUsername
	I0719 05:08:41.562831  167777 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/test-preload-332657/id_rsa Username:docker}
	I0719 05:08:41.643737  167777 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 05:08:41.647399  167777 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 05:08:41.647425  167777 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-122995/.minikube/addons for local assets ...
	I0719 05:08:41.647503  167777 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-122995/.minikube/files for local assets ...
	I0719 05:08:41.647600  167777 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem -> 1301702.pem in /etc/ssl/certs
	I0719 05:08:41.647691  167777 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 05:08:41.656351  167777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem --> /etc/ssl/certs/1301702.pem (1708 bytes)
	I0719 05:08:41.677081  167777 start.go:296] duration metric: took 118.231729ms for postStartSetup
	I0719 05:08:41.677116  167777 fix.go:56] duration metric: took 18.967122919s for fixHost
	I0719 05:08:41.677140  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHHostname
	I0719 05:08:41.679959  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:41.680302  167777 main.go:141] libmachine: (test-preload-332657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:22:1f", ip: ""} in network mk-test-preload-332657: {Iface:virbr1 ExpiryTime:2024-07-19 06:08:32 +0000 UTC Type:0 Mac:52:54:00:06:22:1f Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:test-preload-332657 Clientid:01:52:54:00:06:22:1f}
	I0719 05:08:41.680342  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined IP address 192.168.39.207 and MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:41.680530  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHPort
	I0719 05:08:41.680723  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHKeyPath
	I0719 05:08:41.680891  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHKeyPath
	I0719 05:08:41.681092  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHUsername
	I0719 05:08:41.681264  167777 main.go:141] libmachine: Using SSH client type: native
	I0719 05:08:41.681465  167777 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.207 22 <nil> <nil>}
	I0719 05:08:41.681480  167777 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 05:08:41.782027  167777 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721365721.759059295
	
	I0719 05:08:41.782053  167777 fix.go:216] guest clock: 1721365721.759059295
	I0719 05:08:41.782061  167777 fix.go:229] Guest: 2024-07-19 05:08:41.759059295 +0000 UTC Remote: 2024-07-19 05:08:41.67711991 +0000 UTC m=+31.808578045 (delta=81.939385ms)
	I0719 05:08:41.782085  167777 fix.go:200] guest clock delta is within tolerance: 81.939385ms
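
fix.go reads the guest clock over SSH (date +%s.%N), compares it with the host-side timestamp, and only resyncs when the delta exceeds a tolerance; here the 81.9ms delta is accepted. A tiny sketch of that comparison, using the two timestamps from the log (the 2s threshold is an assumed value for illustration):

package clockcheck

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock is close enough to the
// reference time that no resync is needed.
func withinTolerance(guest, remote time.Time, tolerance time.Duration) (bool, time.Duration) {
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance, delta
}

func demo() {
	guest := time.Unix(1721365721, 759059295)  // parsed from the "date +%s.%N" output above
	remote := time.Unix(1721365721, 677119910) // host-side timestamp from the log
	ok, delta := withinTolerance(guest, remote, 2*time.Second)
	fmt.Println(ok, delta) // true 81.939385ms
}
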
	I0719 05:08:41.782092  167777 start.go:83] releasing machines lock for "test-preload-332657", held for 19.072108749s
	I0719 05:08:41.782118  167777 main.go:141] libmachine: (test-preload-332657) Calling .DriverName
	I0719 05:08:41.782413  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetIP
	I0719 05:08:41.785163  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:41.785674  167777 main.go:141] libmachine: (test-preload-332657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:22:1f", ip: ""} in network mk-test-preload-332657: {Iface:virbr1 ExpiryTime:2024-07-19 06:08:32 +0000 UTC Type:0 Mac:52:54:00:06:22:1f Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:test-preload-332657 Clientid:01:52:54:00:06:22:1f}
	I0719 05:08:41.785706  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined IP address 192.168.39.207 and MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:41.785883  167777 main.go:141] libmachine: (test-preload-332657) Calling .DriverName
	I0719 05:08:41.786513  167777 main.go:141] libmachine: (test-preload-332657) Calling .DriverName
	I0719 05:08:41.786766  167777 main.go:141] libmachine: (test-preload-332657) Calling .DriverName
	I0719 05:08:41.786909  167777 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 05:08:41.786981  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHHostname
	I0719 05:08:41.787157  167777 ssh_runner.go:195] Run: cat /version.json
	I0719 05:08:41.787178  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHHostname
	I0719 05:08:41.789955  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:41.790346  167777 main.go:141] libmachine: (test-preload-332657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:22:1f", ip: ""} in network mk-test-preload-332657: {Iface:virbr1 ExpiryTime:2024-07-19 06:08:32 +0000 UTC Type:0 Mac:52:54:00:06:22:1f Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:test-preload-332657 Clientid:01:52:54:00:06:22:1f}
	I0719 05:08:41.790383  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:41.790407  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined IP address 192.168.39.207 and MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:41.790540  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHPort
	I0719 05:08:41.790687  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHKeyPath
	I0719 05:08:41.790816  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHUsername
	I0719 05:08:41.790834  167777 main.go:141] libmachine: (test-preload-332657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:22:1f", ip: ""} in network mk-test-preload-332657: {Iface:virbr1 ExpiryTime:2024-07-19 06:08:32 +0000 UTC Type:0 Mac:52:54:00:06:22:1f Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:test-preload-332657 Clientid:01:52:54:00:06:22:1f}
	I0719 05:08:41.790857  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined IP address 192.168.39.207 and MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:41.790981  167777 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/test-preload-332657/id_rsa Username:docker}
	I0719 05:08:41.791081  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHPort
	I0719 05:08:41.791221  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHKeyPath
	I0719 05:08:41.791387  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHUsername
	I0719 05:08:41.791554  167777 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/test-preload-332657/id_rsa Username:docker}
	I0719 05:08:41.903081  167777 ssh_runner.go:195] Run: systemctl --version
	I0719 05:08:41.908693  167777 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 05:08:42.047724  167777 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 05:08:42.053503  167777 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 05:08:42.053567  167777 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 05:08:42.071090  167777 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 05:08:42.071123  167777 start.go:495] detecting cgroup driver to use...
	I0719 05:08:42.071188  167777 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 05:08:42.087862  167777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 05:08:42.101452  167777 docker.go:217] disabling cri-docker service (if available) ...
	I0719 05:08:42.101508  167777 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 05:08:42.114741  167777 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 05:08:42.128340  167777 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 05:08:42.246611  167777 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 05:08:42.384720  167777 docker.go:233] disabling docker service ...
	I0719 05:08:42.384801  167777 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 05:08:42.398088  167777 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 05:08:42.410393  167777 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 05:08:42.522549  167777 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 05:08:42.635302  167777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 05:08:42.648410  167777 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 05:08:42.665040  167777 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0719 05:08:42.665139  167777 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 05:08:42.674514  167777 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 05:08:42.674591  167777 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 05:08:42.684088  167777 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 05:08:42.693683  167777 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 05:08:42.703495  167777 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 05:08:42.713184  167777 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 05:08:42.722516  167777 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 05:08:42.737770  167777 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 05:08:42.746912  167777 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 05:08:42.755367  167777 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 05:08:42.755422  167777 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 05:08:42.767758  167777 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 05:08:42.776297  167777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:08:42.884310  167777 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 05:08:43.003931  167777 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 05:08:43.004027  167777 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 05:08:43.008625  167777 start.go:563] Will wait 60s for crictl version
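
After restarting CRI-O, start.go waits up to 60s for /var/run/crio/crio.sock to appear and then for crictl to answer. A minimal sketch of such a wait loop (illustrative; not minikube's retry helper):

package criwait

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the socket path exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

// usage: waitForSocket("/var/run/crio/crio.sock", 60*time.Second)
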
	I0719 05:08:43.008687  167777 ssh_runner.go:195] Run: which crictl
	I0719 05:08:43.012038  167777 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 05:08:43.048390  167777 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 05:08:43.048469  167777 ssh_runner.go:195] Run: crio --version
	I0719 05:08:43.074706  167777 ssh_runner.go:195] Run: crio --version
	I0719 05:08:43.102097  167777 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0719 05:08:43.103386  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetIP
	I0719 05:08:43.106451  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:43.106821  167777 main.go:141] libmachine: (test-preload-332657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:22:1f", ip: ""} in network mk-test-preload-332657: {Iface:virbr1 ExpiryTime:2024-07-19 06:08:32 +0000 UTC Type:0 Mac:52:54:00:06:22:1f Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:test-preload-332657 Clientid:01:52:54:00:06:22:1f}
	I0719 05:08:43.106853  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined IP address 192.168.39.207 and MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:08:43.107065  167777 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0719 05:08:43.116824  167777 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 05:08:43.128803  167777 kubeadm.go:883] updating cluster {Name:test-preload-332657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-332657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 05:08:43.128914  167777 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0719 05:08:43.128954  167777 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 05:08:43.162632  167777 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0719 05:08:43.162712  167777 ssh_runner.go:195] Run: which lz4
	I0719 05:08:43.166258  167777 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 05:08:43.169925  167777 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 05:08:43.169959  167777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0719 05:08:44.531133  167777 crio.go:462] duration metric: took 1.364898028s to copy over tarball
	I0719 05:08:44.531205  167777 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 05:08:46.798320  167777 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.267079375s)
	I0719 05:08:46.798353  167777 crio.go:469] duration metric: took 2.267191586s to extract the tarball
	I0719 05:08:46.798361  167777 ssh_runner.go:146] rm: /preloaded.tar.lz4
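
Since crictl images shows none of the v1.24.4 control-plane images, the preloaded tarball is copied to the VM and unpacked into /var with lz4. A short sketch of the extract step as executed above, via os/exec (an illustrative wrapper; the tar flags are the ones shown in the log):

package preload

import (
	"os/exec"
)

// extractPreload unpacks the preloaded image tarball into /var, preserving
// xattrs so CRI-O's overlay store stays intact (the same tar invocation the
// log shows ssh_runner executing on the VM).
func extractPreload(tarball string) ([]byte, error) {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	return cmd.CombinedOutput()
}

// usage: out, err := extractPreload("/preloaded.tar.lz4")
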
	I0719 05:08:46.837811  167777 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 05:08:46.876739  167777 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0719 05:08:46.876765  167777 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 05:08:46.876838  167777 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 05:08:46.876861  167777 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0719 05:08:46.876883  167777 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0719 05:08:46.876909  167777 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0719 05:08:46.876933  167777 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0719 05:08:46.876987  167777 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0719 05:08:46.876865  167777 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0719 05:08:46.876876  167777 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0719 05:08:46.878434  167777 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0719 05:08:46.878446  167777 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0719 05:08:46.878454  167777 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0719 05:08:46.878463  167777 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 05:08:46.878434  167777 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0719 05:08:46.878446  167777 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0719 05:08:46.878449  167777 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0719 05:08:46.878449  167777 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0719 05:08:47.100435  167777 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0719 05:08:47.110029  167777 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0719 05:08:47.112048  167777 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0719 05:08:47.112617  167777 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0719 05:08:47.115111  167777 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0719 05:08:47.123546  167777 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0719 05:08:47.129944  167777 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0719 05:08:47.161712  167777 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0719 05:08:47.161753  167777 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0719 05:08:47.161806  167777 ssh_runner.go:195] Run: which crictl
	I0719 05:08:47.251657  167777 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0719 05:08:47.251705  167777 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0719 05:08:47.251760  167777 ssh_runner.go:195] Run: which crictl
	I0719 05:08:47.251823  167777 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0719 05:08:47.251865  167777 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0719 05:08:47.251918  167777 ssh_runner.go:195] Run: which crictl
	I0719 05:08:47.257722  167777 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0719 05:08:47.257767  167777 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0719 05:08:47.257771  167777 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0719 05:08:47.257810  167777 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0719 05:08:47.257814  167777 ssh_runner.go:195] Run: which crictl
	I0719 05:08:47.257851  167777 ssh_runner.go:195] Run: which crictl
	I0719 05:08:47.272301  167777 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0719 05:08:47.272337  167777 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0719 05:08:47.272350  167777 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0719 05:08:47.272377  167777 ssh_runner.go:195] Run: which crictl
	I0719 05:08:47.272382  167777 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0719 05:08:47.272409  167777 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0719 05:08:47.272418  167777 ssh_runner.go:195] Run: which crictl
	I0719 05:08:47.272453  167777 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0719 05:08:47.272473  167777 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0719 05:08:47.272456  167777 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0719 05:08:47.272499  167777 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0719 05:08:47.359730  167777 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0719 05:08:47.359856  167777 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0719 05:08:47.372136  167777 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0719 05:08:47.372186  167777 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0719 05:08:47.372220  167777 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0719 05:08:47.372238  167777 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0719 05:08:47.372249  167777 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0719 05:08:47.372266  167777 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0719 05:08:47.372294  167777 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0719 05:08:47.372321  167777 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0719 05:08:47.372335  167777 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0719 05:08:47.372351  167777 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0719 05:08:47.372361  167777 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0719 05:08:47.372381  167777 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0719 05:08:47.372390  167777 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0719 05:08:47.386237  167777 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0719 05:08:47.734242  167777 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 05:08:50.255597  167777 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4: (2.883184626s)
	I0719 05:08:50.255632  167777 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0719 05:08:50.255653  167777 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0719 05:08:50.255653  167777 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4: (2.883308731s)
	I0719 05:08:50.255699  167777 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0719 05:08:50.255702  167777 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.883349407s)
	I0719 05:08:50.255711  167777 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0719 05:08:50.255727  167777 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0719 05:08:50.255743  167777 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: (2.883345497s)
	I0719 05:08:50.255765  167777 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0719 05:08:50.255788  167777 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0719 05:08:50.255805  167777 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0: (2.883557701s)
	I0719 05:08:50.255837  167777 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: (2.883582447s)
	I0719 05:08:50.255848  167777 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0719 05:08:50.255858  167777 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0719 05:08:50.255877  167777 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.521611991s)
	I0719 05:08:50.255934  167777 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0719 05:08:50.602868  167777 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0719 05:08:50.602910  167777 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0719 05:08:50.602932  167777 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0719 05:08:50.602963  167777 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0719 05:08:50.602965  167777 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0719 05:08:51.244632  167777 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0719 05:08:51.244680  167777 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0719 05:08:51.244730  167777 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0719 05:08:51.386070  167777 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0719 05:08:51.386126  167777 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0719 05:08:51.386191  167777 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0719 05:08:52.128550  167777 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0719 05:08:52.128598  167777 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0719 05:08:52.128653  167777 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0719 05:08:52.967769  167777 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0719 05:08:52.967822  167777 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0719 05:08:52.967893  167777 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0719 05:08:55.124100  167777 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.156180094s)
	I0719 05:08:55.124126  167777 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0719 05:08:55.124154  167777 cache_images.go:123] Successfully loaded all cached images
	I0719 05:08:55.124159  167777 cache_images.go:92] duration metric: took 8.247382826s to LoadCachedImages
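
LoadCachedImages proceeds per image: inspect it with podman, and if it is missing, remove any stale tag with crictl, copy the cached archive from .minikube/cache/images into /var/lib/minikube/images, and podman load it. A rough sketch of that sequence; the helper name is made up for illustration and the host-to-VM copy step is omitted:

package cacheimages

import (
	"fmt"
	"os/exec"
)

// ensureImage makes sure one cached image is present in CRI-O's store,
// mirroring the inspect -> rmi -> load sequence in the log above.
func ensureImage(name, archive string) error {
	// Is the image already present in the store?
	if err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", name).Run(); err == nil {
		return nil // already loaded
	}
	// Drop any stale tag, then load the archive copied from the host cache.
	_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", name).Run()
	if out, err := exec.Command("sudo", "podman", "load", "-i", archive).CombinedOutput(); err != nil {
		return fmt.Errorf("podman load %s: %v: %s", archive, err, out)
	}
	return nil
}

// usage: ensureImage("registry.k8s.io/etcd:3.5.3-0", "/var/lib/minikube/images/etcd_3.5.3-0")
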
	I0719 05:08:55.124167  167777 kubeadm.go:934] updating node { 192.168.39.207 8443 v1.24.4 crio true true} ...
	I0719 05:08:55.124272  167777 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-332657 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.207
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-332657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 05:08:55.124344  167777 ssh_runner.go:195] Run: crio config
	I0719 05:08:55.171536  167777 cni.go:84] Creating CNI manager for ""
	I0719 05:08:55.171559  167777 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 05:08:55.171569  167777 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 05:08:55.171607  167777 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.207 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-332657 NodeName:test-preload-332657 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.207"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.207 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 05:08:55.171785  167777 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.207
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-332657"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.207
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.207"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 05:08:55.171863  167777 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0719 05:08:55.181520  167777 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 05:08:55.181603  167777 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 05:08:55.190269  167777 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0719 05:08:55.205475  167777 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 05:08:55.220026  167777 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
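
The kubeadm.yaml shown above (the 2106-byte kubeadm.yaml.new) and the kubelet unit drop-in are generated on the host and copied into the VM. A much-reduced sketch of producing such a config with text/template, as a stand-in rather than minikube's actual template:

package kubeadmcfg

import (
	"bytes"
	"text/template"
)

var initCfg = template.Must(template.New("init").Parse(`apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`))

// render produces the InitConfiguration fragment for one node.
func render(nodeName, nodeIP string, port int) ([]byte, error) {
	var buf bytes.Buffer
	err := initCfg.Execute(&buf, struct {
		NodeName, NodeIP string
		APIServerPort    int
	}{nodeName, nodeIP, port})
	return buf.Bytes(), err
}

// usage: render("test-preload-332657", "192.168.39.207", 8443)
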
	I0719 05:08:55.235995  167777 ssh_runner.go:195] Run: grep 192.168.39.207	control-plane.minikube.internal$ /etc/hosts
	I0719 05:08:55.239547  167777 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.207	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 05:08:55.250831  167777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:08:55.370957  167777 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 05:08:55.387185  167777 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/test-preload-332657 for IP: 192.168.39.207
	I0719 05:08:55.387213  167777 certs.go:194] generating shared ca certs ...
	I0719 05:08:55.387236  167777 certs.go:226] acquiring lock for ca certs: {Name:mk4073377b5f511f5cfaf63e5b0f12377e731a57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:08:55.387423  167777 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key
	I0719 05:08:55.387483  167777 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key
	I0719 05:08:55.387497  167777 certs.go:256] generating profile certs ...
	I0719 05:08:55.387606  167777 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/test-preload-332657/client.key
	I0719 05:08:55.387697  167777 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/test-preload-332657/apiserver.key.892cf9cc
	I0719 05:08:55.387756  167777 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/test-preload-332657/proxy-client.key
	I0719 05:08:55.387924  167777 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem (1338 bytes)
	W0719 05:08:55.387970  167777 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170_empty.pem, impossibly tiny 0 bytes
	I0719 05:08:55.387993  167777 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 05:08:55.388036  167777 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem (1082 bytes)
	I0719 05:08:55.388072  167777 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem (1123 bytes)
	I0719 05:08:55.388112  167777 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem (1679 bytes)
	I0719 05:08:55.388166  167777 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem (1708 bytes)
	I0719 05:08:55.389117  167777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 05:08:55.427858  167777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 05:08:55.450619  167777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 05:08:55.478397  167777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 05:08:55.504101  167777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/test-preload-332657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0719 05:08:55.530994  167777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/test-preload-332657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 05:08:55.572848  167777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/test-preload-332657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 05:08:55.595980  167777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/test-preload-332657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 05:08:55.617121  167777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem --> /usr/share/ca-certificates/1301702.pem (1708 bytes)
	I0719 05:08:55.638002  167777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 05:08:55.659365  167777 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem --> /usr/share/ca-certificates/130170.pem (1338 bytes)
	I0719 05:08:55.680931  167777 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 05:08:55.696812  167777 ssh_runner.go:195] Run: openssl version
	I0719 05:08:55.702512  167777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130170.pem && ln -fs /usr/share/ca-certificates/130170.pem /etc/ssl/certs/130170.pem"
	I0719 05:08:55.712031  167777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130170.pem
	I0719 05:08:55.716017  167777 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 04:19 /usr/share/ca-certificates/130170.pem
	I0719 05:08:55.716074  167777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130170.pem
	I0719 05:08:55.721503  167777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/130170.pem /etc/ssl/certs/51391683.0"
	I0719 05:08:55.730891  167777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1301702.pem && ln -fs /usr/share/ca-certificates/1301702.pem /etc/ssl/certs/1301702.pem"
	I0719 05:08:55.740502  167777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1301702.pem
	I0719 05:08:55.744537  167777 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 04:19 /usr/share/ca-certificates/1301702.pem
	I0719 05:08:55.744610  167777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1301702.pem
	I0719 05:08:55.749723  167777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1301702.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 05:08:55.759344  167777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 05:08:55.769149  167777 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:08:55.773238  167777 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:38 /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:08:55.773289  167777 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:08:55.778547  167777 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
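The three blocks above repeat one pattern per CA certificate: hash the PEM with openssl and link it into /etc/ssl/certs/<hash>.0 so OpenSSL-based clients trust it. A minimal Go sketch of that pattern, assuming one cert path taken from the log and simplified error handling (not minikube's actual implementation):

    // cahash.go: sketch of the hash-and-symlink step (paths are examples).
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem" // example path from the log

        // `openssl x509 -hash -noout -in <cert>` prints the subject hash, e.g. b5213941.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, "hashing failed:", err)
            os.Exit(1)
        }
        hash := strings.TrimSpace(string(out))

        // Equivalent to: ln -fs <cert> /etc/ssl/certs/<hash>.0
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link)
        if err := os.Symlink(cert, link); err != nil {
            fmt.Fprintln(os.Stderr, "symlink failed:", err)
            os.Exit(1)
        }
        fmt.Println("linked", cert, "->", link)
    }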
	I0719 05:08:55.788198  167777 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 05:08:55.792536  167777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 05:08:55.798451  167777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 05:08:55.803921  167777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 05:08:55.809514  167777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 05:08:55.814857  167777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 05:08:55.820246  167777 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
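The six checks above all use openssl's -checkend 86400 to confirm each control-plane certificate stays valid for at least another day. An illustrative Go sketch of the same check; the file list here is a sample, not necessarily the exact set minikube verifies:

    // certcheck.go: confirm certificates are valid for at least 24h by
    // shelling out to `openssl x509 -checkend 86400`, as the log does.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        certs := []string{ // sample files
            "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
            "/var/lib/minikube/certs/front-proxy-client.crt",
        }
        for _, c := range certs {
            // -checkend 86400 exits non-zero if the cert expires within 24h.
            if err := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400").Run(); err != nil {
                fmt.Printf("%s: expiring within 24h or unreadable (%v)\n", c, err)
                continue
            }
            fmt.Printf("%s: valid for at least 24h\n", c)
        }
    }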
	I0719 05:08:55.825870  167777 kubeadm.go:392] StartCluster: {Name:test-preload-332657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-332657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 05:08:55.826009  167777 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 05:08:55.826096  167777 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 05:08:55.861057  167777 cri.go:89] found id: ""
	I0719 05:08:55.861139  167777 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 05:08:55.870866  167777 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 05:08:55.870884  167777 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 05:08:55.870926  167777 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 05:08:55.880184  167777 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 05:08:55.880637  167777 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-332657" does not appear in /home/jenkins/minikube-integration/19302-122995/kubeconfig
	I0719 05:08:55.880765  167777 kubeconfig.go:62] /home/jenkins/minikube-integration/19302-122995/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-332657" cluster setting kubeconfig missing "test-preload-332657" context setting]
	I0719 05:08:55.881034  167777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/kubeconfig: {Name:mk6e4a1b81f147a5c312ddde5acb372811581248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:08:55.881709  167777 kapi.go:59] client config for test-preload-332657: &rest.Config{Host:"https://192.168.39.207:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19302-122995/.minikube/profiles/test-preload-332657/client.crt", KeyFile:"/home/jenkins/minikube-integration/19302-122995/.minikube/profiles/test-preload-332657/client.key", CAFile:"/home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 05:08:55.882325  167777 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 05:08:55.891311  167777 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.207
	I0719 05:08:55.891344  167777 kubeadm.go:1160] stopping kube-system containers ...
	I0719 05:08:55.891356  167777 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0719 05:08:55.891398  167777 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 05:08:55.930482  167777 cri.go:89] found id: ""
	I0719 05:08:55.930565  167777 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 05:08:55.946471  167777 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 05:08:55.956229  167777 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 05:08:55.956248  167777 kubeadm.go:157] found existing configuration files:
	
	I0719 05:08:55.956327  167777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 05:08:55.964747  167777 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 05:08:55.964817  167777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 05:08:55.973759  167777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 05:08:55.982356  167777 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 05:08:55.982442  167777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 05:08:55.991890  167777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 05:08:56.000664  167777 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 05:08:56.000735  167777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 05:08:56.009746  167777 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 05:08:56.018256  167777 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 05:08:56.018320  167777 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
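Each config file above is kept only if it already references https://control-plane.minikube.internal:8443; otherwise it is removed so kubeadm can regenerate it. A hedged Go sketch of that keep-or-remove loop, with the file names taken from the log and the logic simplified:

    // staleconf.go: keep a kubeconfig-style file only if it already points
    // at the control-plane endpoint; otherwise remove it for regeneration.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err == nil && strings.Contains(string(data), endpoint) {
                fmt.Println("keeping", f)
                continue
            }
            // Missing or pointing elsewhere: remove so kubeadm regenerates it.
            if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
                fmt.Println("could not remove", f, ":", err)
                continue
            }
            fmt.Println("removed (or already absent):", f)
        }
    }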
	I0719 05:08:56.027086  167777 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 05:08:56.036986  167777 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 05:08:56.127130  167777 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 05:08:56.958702  167777 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 05:08:57.214613  167777 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 05:08:57.275775  167777 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
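The restart then replays the individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated kubeadm.yaml. A minimal sketch of that sequence, assuming the binary directory and config path shown in the log:

    // phases.go: replay the kubeadm init phases shown above, in order,
    // with PATH pointing at the matching kubeadm/kubelet binaries.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        binDir := "/var/lib/minikube/binaries/v1.24.4" // from the log
        cfg := "/var/tmp/minikube/kubeadm.yaml"

        phases := [][]string{
            {"init", "phase", "certs", "all", "--config", cfg},
            {"init", "phase", "kubeconfig", "all", "--config", cfg},
            {"init", "phase", "kubelet-start", "--config", cfg},
            {"init", "phase", "control-plane", "all", "--config", cfg},
            {"init", "phase", "etcd", "local", "--config", cfg},
        }
        for _, args := range phases {
            cmd := exec.Command(binDir+"/kubeadm", args...)
            cmd.Env = append(os.Environ(), "PATH="+binDir+":"+os.Getenv("PATH"))
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", args, err)
                os.Exit(1)
            }
        }
    }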
	I0719 05:08:57.341444  167777 api_server.go:52] waiting for apiserver process to appear ...
	I0719 05:08:57.341537  167777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 05:08:57.842605  167777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 05:08:58.342132  167777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 05:08:58.365288  167777 api_server.go:72] duration metric: took 1.023843683s to wait for apiserver process to appear ...
	I0719 05:08:58.365319  167777 api_server.go:88] waiting for apiserver healthz status ...
	I0719 05:08:58.365341  167777 api_server.go:253] Checking apiserver healthz at https://192.168.39.207:8443/healthz ...
	I0719 05:08:58.365863  167777 api_server.go:269] stopped: https://192.168.39.207:8443/healthz: Get "https://192.168.39.207:8443/healthz": dial tcp 192.168.39.207:8443: connect: connection refused
	I0719 05:08:58.865427  167777 api_server.go:253] Checking apiserver healthz at https://192.168.39.207:8443/healthz ...
	I0719 05:09:02.182995  167777 api_server.go:279] https://192.168.39.207:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 05:09:02.183025  167777 api_server.go:103] status: https://192.168.39.207:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 05:09:02.183040  167777 api_server.go:253] Checking apiserver healthz at https://192.168.39.207:8443/healthz ...
	I0719 05:09:02.199556  167777 api_server.go:279] https://192.168.39.207:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 05:09:02.199592  167777 api_server.go:103] status: https://192.168.39.207:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 05:09:02.365948  167777 api_server.go:253] Checking apiserver healthz at https://192.168.39.207:8443/healthz ...
	I0719 05:09:02.372316  167777 api_server.go:279] https://192.168.39.207:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 05:09:02.372351  167777 api_server.go:103] status: https://192.168.39.207:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 05:09:02.866456  167777 api_server.go:253] Checking apiserver healthz at https://192.168.39.207:8443/healthz ...
	I0719 05:09:02.872516  167777 api_server.go:279] https://192.168.39.207:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 05:09:02.872545  167777 api_server.go:103] status: https://192.168.39.207:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 05:09:03.366192  167777 api_server.go:253] Checking apiserver healthz at https://192.168.39.207:8443/healthz ...
	I0719 05:09:03.372828  167777 api_server.go:279] https://192.168.39.207:8443/healthz returned 200:
	ok
	I0719 05:09:03.382764  167777 api_server.go:141] control plane version: v1.24.4
	I0719 05:09:03.382794  167777 api_server.go:131] duration metric: took 5.017467292s to wait for apiserver health ...
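The wait above polls https://<node-ip>:8443/healthz every ~500ms, tolerating connection refusals, anonymous 403s, and 500s while post-start hooks finish, until it gets 200 "ok". An illustrative Go poller under those assumptions (TLS verification is skipped here only to keep the sketch short):

    // healthwait.go: poll the apiserver /healthz endpoint until it answers
    // 200 "ok" or a deadline passes; 403/500 responses count as "not ready".
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        url := "https://192.168.39.207:8443/healthz" // node IP from the log
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Anonymous probe; certificate verification skipped for brevity only.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }

        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("apiserver healthy: %s\n", body)
                    return
                }
                fmt.Printf("not ready yet: %d\n", resp.StatusCode)
            } else {
                fmt.Println("not reachable yet:", err)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("gave up waiting for /healthz")
    }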
	I0719 05:09:03.382807  167777 cni.go:84] Creating CNI manager for ""
	I0719 05:09:03.382816  167777 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 05:09:03.384505  167777 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 05:09:03.385703  167777 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 05:09:03.401848  167777 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 05:09:03.438654  167777 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 05:09:03.451646  167777 system_pods.go:59] 8 kube-system pods found
	I0719 05:09:03.451683  167777 system_pods.go:61] "coredns-6d4b75cb6d-b7vbs" [d66b5dfa-29e7-45e9-b6b4-cfb79ac6a42c] Running
	I0719 05:09:03.451689  167777 system_pods.go:61] "coredns-6d4b75cb6d-trt5k" [c5467929-b2d5-447e-bb33-d477400b09b4] Running
	I0719 05:09:03.451706  167777 system_pods.go:61] "etcd-test-preload-332657" [6d6a9494-b997-4046-b687-5cf7eec783ee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 05:09:03.451714  167777 system_pods.go:61] "kube-apiserver-test-preload-332657" [e6b9096c-1cf7-42ea-8d04-589a6f417a23] Running
	I0719 05:09:03.451724  167777 system_pods.go:61] "kube-controller-manager-test-preload-332657" [be644f55-831f-4e71-a348-9e11c6cc323c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 05:09:03.451729  167777 system_pods.go:61] "kube-proxy-zncv9" [2c7ef787-9dc8-4f6f-b729-a84aa1886c16] Running
	I0719 05:09:03.451734  167777 system_pods.go:61] "kube-scheduler-test-preload-332657" [47216010-2e3a-4303-849a-782938bcf128] Running
	I0719 05:09:03.451738  167777 system_pods.go:61] "storage-provisioner" [454dd667-bd69-4a62-9646-9174a0e0ba9c] Running
	I0719 05:09:03.451746  167777 system_pods.go:74] duration metric: took 13.070257ms to wait for pod list to return data ...
	I0719 05:09:03.451757  167777 node_conditions.go:102] verifying NodePressure condition ...
	I0719 05:09:03.455712  167777 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 05:09:03.455750  167777 node_conditions.go:123] node cpu capacity is 2
	I0719 05:09:03.455765  167777 node_conditions.go:105] duration metric: took 3.99857ms to run NodePressure ...
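The NodePressure step reads node capacity and pressure conditions from the API. A sketch of the same read with client-go, assuming a kubeconfig at the default ~/.kube/config location; field names follow the Kubernetes core/v1 types:

    // nodepressure.go: print node capacity and pressure conditions.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
            for _, c := range n.Status.Conditions {
                switch c.Type {
                case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                    fmt.Printf("  %s=%s\n", c.Type, c.Status)
                }
            }
        }
    }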
	I0719 05:09:03.455792  167777 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 05:09:03.694946  167777 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 05:09:03.698640  167777 kubeadm.go:739] kubelet initialised
	I0719 05:09:03.698666  167777 kubeadm.go:740] duration metric: took 3.687045ms waiting for restarted kubelet to initialise ...
	I0719 05:09:03.698677  167777 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 05:09:03.705589  167777 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-b7vbs" in "kube-system" namespace to be "Ready" ...
	I0719 05:09:03.711099  167777 pod_ready.go:97] node "test-preload-332657" hosting pod "coredns-6d4b75cb6d-b7vbs" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-332657" has status "Ready":"False"
	I0719 05:09:03.711129  167777 pod_ready.go:81] duration metric: took 5.513553ms for pod "coredns-6d4b75cb6d-b7vbs" in "kube-system" namespace to be "Ready" ...
	E0719 05:09:03.711141  167777 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-332657" hosting pod "coredns-6d4b75cb6d-b7vbs" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-332657" has status "Ready":"False"
	I0719 05:09:03.711149  167777 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-trt5k" in "kube-system" namespace to be "Ready" ...
	I0719 05:09:03.716830  167777 pod_ready.go:97] node "test-preload-332657" hosting pod "coredns-6d4b75cb6d-trt5k" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-332657" has status "Ready":"False"
	I0719 05:09:03.716856  167777 pod_ready.go:81] duration metric: took 5.692224ms for pod "coredns-6d4b75cb6d-trt5k" in "kube-system" namespace to be "Ready" ...
	E0719 05:09:03.716869  167777 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-332657" hosting pod "coredns-6d4b75cb6d-trt5k" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-332657" has status "Ready":"False"
	I0719 05:09:03.716877  167777 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-332657" in "kube-system" namespace to be "Ready" ...
	I0719 05:09:03.721676  167777 pod_ready.go:97] node "test-preload-332657" hosting pod "etcd-test-preload-332657" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-332657" has status "Ready":"False"
	I0719 05:09:03.721701  167777 pod_ready.go:81] duration metric: took 4.810871ms for pod "etcd-test-preload-332657" in "kube-system" namespace to be "Ready" ...
	E0719 05:09:03.721710  167777 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-332657" hosting pod "etcd-test-preload-332657" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-332657" has status "Ready":"False"
	I0719 05:09:03.721715  167777 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-332657" in "kube-system" namespace to be "Ready" ...
	I0719 05:09:03.844081  167777 pod_ready.go:97] node "test-preload-332657" hosting pod "kube-apiserver-test-preload-332657" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-332657" has status "Ready":"False"
	I0719 05:09:03.844123  167777 pod_ready.go:81] duration metric: took 122.397218ms for pod "kube-apiserver-test-preload-332657" in "kube-system" namespace to be "Ready" ...
	E0719 05:09:03.844137  167777 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-332657" hosting pod "kube-apiserver-test-preload-332657" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-332657" has status "Ready":"False"
	I0719 05:09:03.844145  167777 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-332657" in "kube-system" namespace to be "Ready" ...
	I0719 05:09:04.242290  167777 pod_ready.go:97] node "test-preload-332657" hosting pod "kube-controller-manager-test-preload-332657" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-332657" has status "Ready":"False"
	I0719 05:09:04.242320  167777 pod_ready.go:81] duration metric: took 398.165644ms for pod "kube-controller-manager-test-preload-332657" in "kube-system" namespace to be "Ready" ...
	E0719 05:09:04.242330  167777 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-332657" hosting pod "kube-controller-manager-test-preload-332657" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-332657" has status "Ready":"False"
	I0719 05:09:04.242342  167777 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zncv9" in "kube-system" namespace to be "Ready" ...
	I0719 05:09:04.641433  167777 pod_ready.go:97] node "test-preload-332657" hosting pod "kube-proxy-zncv9" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-332657" has status "Ready":"False"
	I0719 05:09:04.641459  167777 pod_ready.go:81] duration metric: took 399.109212ms for pod "kube-proxy-zncv9" in "kube-system" namespace to be "Ready" ...
	E0719 05:09:04.641469  167777 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-332657" hosting pod "kube-proxy-zncv9" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-332657" has status "Ready":"False"
	I0719 05:09:04.641475  167777 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-332657" in "kube-system" namespace to be "Ready" ...
	I0719 05:09:05.042208  167777 pod_ready.go:97] node "test-preload-332657" hosting pod "kube-scheduler-test-preload-332657" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-332657" has status "Ready":"False"
	I0719 05:09:05.042235  167777 pod_ready.go:81] duration metric: took 400.75412ms for pod "kube-scheduler-test-preload-332657" in "kube-system" namespace to be "Ready" ...
	E0719 05:09:05.042244  167777 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-332657" hosting pod "kube-scheduler-test-preload-332657" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-332657" has status "Ready":"False"
	I0719 05:09:05.042251  167777 pod_ready.go:38] duration metric: took 1.343564654s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
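The pod_ready loop above lists kube-system pods by label and waits until each reports the Ready condition as True (skipping while the node itself is NotReady). A simplified client-go sketch of that wait, with a sample label list and timeout rather than minikube's exact ones:

    // podready.go: wait until every kube-system pod matching each label
    // selector reports Ready=True, or give up at the deadline.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func ready(p corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        selectors := []string{"k8s-app=kube-dns", "component=kube-apiserver", "component=etcd"} // sample
        deadline := time.Now().Add(4 * time.Minute)
        for _, sel := range selectors {
            for {
                pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
                allReady := err == nil && len(pods.Items) > 0
                if allReady {
                    for _, p := range pods.Items {
                        if !ready(p) {
                            allReady = false
                            break
                        }
                    }
                }
                if allReady {
                    fmt.Println(sel, "ready")
                    break
                }
                if time.Now().After(deadline) {
                    fmt.Println("timed out waiting for", sel)
                    break
                }
                time.Sleep(2 * time.Second)
            }
        }
    }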
	I0719 05:09:05.042269  167777 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 05:09:05.054248  167777 ops.go:34] apiserver oom_adj: -16
	I0719 05:09:05.054274  167777 kubeadm.go:597] duration metric: took 9.183384176s to restartPrimaryControlPlane
	I0719 05:09:05.054284  167777 kubeadm.go:394] duration metric: took 9.228423452s to StartCluster
	I0719 05:09:05.054301  167777 settings.go:142] acquiring lock: {Name:mka29304fbead54bd9b698f9018edea7e59177cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:09:05.054384  167777 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19302-122995/kubeconfig
	I0719 05:09:05.055259  167777 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/kubeconfig: {Name:mk6e4a1b81f147a5c312ddde5acb372811581248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:09:05.055523  167777 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.207 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 05:09:05.055597  167777 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 05:09:05.055663  167777 addons.go:69] Setting storage-provisioner=true in profile "test-preload-332657"
	I0719 05:09:05.055679  167777 addons.go:69] Setting default-storageclass=true in profile "test-preload-332657"
	I0719 05:09:05.055726  167777 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-332657"
	I0719 05:09:05.055772  167777 config.go:182] Loaded profile config "test-preload-332657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0719 05:09:05.055684  167777 addons.go:234] Setting addon storage-provisioner=true in "test-preload-332657"
	W0719 05:09:05.055808  167777 addons.go:243] addon storage-provisioner should already be in state true
	I0719 05:09:05.055850  167777 host.go:66] Checking if "test-preload-332657" exists ...
	I0719 05:09:05.056114  167777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 05:09:05.056135  167777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 05:09:05.056160  167777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 05:09:05.056264  167777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 05:09:05.057361  167777 out.go:177] * Verifying Kubernetes components...
	I0719 05:09:05.058846  167777 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:09:05.070911  167777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44081
	I0719 05:09:05.071220  167777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34503
	I0719 05:09:05.071453  167777 main.go:141] libmachine: () Calling .GetVersion
	I0719 05:09:05.071641  167777 main.go:141] libmachine: () Calling .GetVersion
	I0719 05:09:05.072014  167777 main.go:141] libmachine: Using API Version  1
	I0719 05:09:05.072031  167777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 05:09:05.072155  167777 main.go:141] libmachine: Using API Version  1
	I0719 05:09:05.072179  167777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 05:09:05.072360  167777 main.go:141] libmachine: () Calling .GetMachineName
	I0719 05:09:05.072508  167777 main.go:141] libmachine: () Calling .GetMachineName
	I0719 05:09:05.072552  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetState
	I0719 05:09:05.073098  167777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 05:09:05.073143  167777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 05:09:05.074802  167777 kapi.go:59] client config for test-preload-332657: &rest.Config{Host:"https://192.168.39.207:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19302-122995/.minikube/profiles/test-preload-332657/client.crt", KeyFile:"/home/jenkins/minikube-integration/19302-122995/.minikube/profiles/test-preload-332657/client.key", CAFile:"/home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 05:09:05.075069  167777 addons.go:234] Setting addon default-storageclass=true in "test-preload-332657"
	W0719 05:09:05.075092  167777 addons.go:243] addon default-storageclass should already be in state true
	I0719 05:09:05.075131  167777 host.go:66] Checking if "test-preload-332657" exists ...
	I0719 05:09:05.075466  167777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 05:09:05.075507  167777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 05:09:05.088070  167777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33205
	I0719 05:09:05.088554  167777 main.go:141] libmachine: () Calling .GetVersion
	I0719 05:09:05.089097  167777 main.go:141] libmachine: Using API Version  1
	I0719 05:09:05.089124  167777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 05:09:05.089443  167777 main.go:141] libmachine: () Calling .GetMachineName
	I0719 05:09:05.089622  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetState
	I0719 05:09:05.090036  167777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36407
	I0719 05:09:05.090497  167777 main.go:141] libmachine: () Calling .GetVersion
	I0719 05:09:05.091020  167777 main.go:141] libmachine: Using API Version  1
	I0719 05:09:05.091046  167777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 05:09:05.091336  167777 main.go:141] libmachine: (test-preload-332657) Calling .DriverName
	I0719 05:09:05.091380  167777 main.go:141] libmachine: () Calling .GetMachineName
	I0719 05:09:05.091881  167777 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 05:09:05.091916  167777 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 05:09:05.093420  167777 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 05:09:05.094862  167777 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 05:09:05.094881  167777 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 05:09:05.094897  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHHostname
	I0719 05:09:05.097862  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:09:05.098290  167777 main.go:141] libmachine: (test-preload-332657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:22:1f", ip: ""} in network mk-test-preload-332657: {Iface:virbr1 ExpiryTime:2024-07-19 06:08:32 +0000 UTC Type:0 Mac:52:54:00:06:22:1f Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:test-preload-332657 Clientid:01:52:54:00:06:22:1f}
	I0719 05:09:05.098320  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined IP address 192.168.39.207 and MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:09:05.098505  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHPort
	I0719 05:09:05.098677  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHKeyPath
	I0719 05:09:05.098831  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHUsername
	I0719 05:09:05.098952  167777 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/test-preload-332657/id_rsa Username:docker}
	I0719 05:09:05.108674  167777 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39469
	I0719 05:09:05.109241  167777 main.go:141] libmachine: () Calling .GetVersion
	I0719 05:09:05.109756  167777 main.go:141] libmachine: Using API Version  1
	I0719 05:09:05.109779  167777 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 05:09:05.110145  167777 main.go:141] libmachine: () Calling .GetMachineName
	I0719 05:09:05.110391  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetState
	I0719 05:09:05.111941  167777 main.go:141] libmachine: (test-preload-332657) Calling .DriverName
	I0719 05:09:05.112152  167777 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 05:09:05.112167  167777 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 05:09:05.112186  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHHostname
	I0719 05:09:05.114804  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:09:05.115319  167777 main.go:141] libmachine: (test-preload-332657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:22:1f", ip: ""} in network mk-test-preload-332657: {Iface:virbr1 ExpiryTime:2024-07-19 06:08:32 +0000 UTC Type:0 Mac:52:54:00:06:22:1f Iaid: IPaddr:192.168.39.207 Prefix:24 Hostname:test-preload-332657 Clientid:01:52:54:00:06:22:1f}
	I0719 05:09:05.115365  167777 main.go:141] libmachine: (test-preload-332657) DBG | domain test-preload-332657 has defined IP address 192.168.39.207 and MAC address 52:54:00:06:22:1f in network mk-test-preload-332657
	I0719 05:09:05.115501  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHPort
	I0719 05:09:05.115681  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHKeyPath
	I0719 05:09:05.115860  167777 main.go:141] libmachine: (test-preload-332657) Calling .GetSSHUsername
	I0719 05:09:05.116016  167777 sshutil.go:53] new ssh client: &{IP:192.168.39.207 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/test-preload-332657/id_rsa Username:docker}
	I0719 05:09:05.224726  167777 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 05:09:05.244805  167777 node_ready.go:35] waiting up to 6m0s for node "test-preload-332657" to be "Ready" ...
	I0719 05:09:05.298051  167777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 05:09:05.314616  167777 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
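Addon enablement above copies each manifest onto the node and applies it with the cluster's own kubectl, pointing KUBECONFIG at the in-VM kubeconfig. A sketch of that apply step; the binary and manifest paths mirror the log, everything else is illustrative rather than minikube's actual addon manager:

    // applyaddons.go: apply rendered addon manifests with the node-local
    // kubectl binary and the in-VM kubeconfig.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.24.4/kubectl" // from the log
        manifests := []string{
            "/etc/kubernetes/addons/storage-provisioner.yaml",
            "/etc/kubernetes/addons/storageclass.yaml",
        }
        for _, m := range manifests {
            cmd := exec.Command(kubectl, "apply", "-f", m)
            cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Fprintf(os.Stderr, "apply %s failed: %v\n", m, err)
            }
        }
    }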
	I0719 05:09:06.226791  167777 main.go:141] libmachine: Making call to close driver server
	I0719 05:09:06.226816  167777 main.go:141] libmachine: Making call to close driver server
	I0719 05:09:06.226837  167777 main.go:141] libmachine: (test-preload-332657) Calling .Close
	I0719 05:09:06.226823  167777 main.go:141] libmachine: (test-preload-332657) Calling .Close
	I0719 05:09:06.227147  167777 main.go:141] libmachine: Successfully made call to close driver server
	I0719 05:09:06.227166  167777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 05:09:06.227175  167777 main.go:141] libmachine: Making call to close driver server
	I0719 05:09:06.227183  167777 main.go:141] libmachine: (test-preload-332657) Calling .Close
	I0719 05:09:06.227191  167777 main.go:141] libmachine: Successfully made call to close driver server
	I0719 05:09:06.227208  167777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 05:09:06.227215  167777 main.go:141] libmachine: Making call to close driver server
	I0719 05:09:06.227191  167777 main.go:141] libmachine: (test-preload-332657) DBG | Closing plugin on server side
	I0719 05:09:06.227223  167777 main.go:141] libmachine: (test-preload-332657) Calling .Close
	I0719 05:09:06.227411  167777 main.go:141] libmachine: Successfully made call to close driver server
	I0719 05:09:06.227424  167777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 05:09:06.228851  167777 main.go:141] libmachine: Successfully made call to close driver server
	I0719 05:09:06.228866  167777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 05:09:06.239271  167777 main.go:141] libmachine: Making call to close driver server
	I0719 05:09:06.239294  167777 main.go:141] libmachine: (test-preload-332657) Calling .Close
	I0719 05:09:06.239568  167777 main.go:141] libmachine: Successfully made call to close driver server
	I0719 05:09:06.239658  167777 main.go:141] libmachine: Making call to close connection to plugin binary
	I0719 05:09:06.239599  167777 main.go:141] libmachine: (test-preload-332657) DBG | Closing plugin on server side
	I0719 05:09:06.241553  167777 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0719 05:09:06.242758  167777 addons.go:510] duration metric: took 1.18716824s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0719 05:09:07.249282  167777 node_ready.go:53] node "test-preload-332657" has status "Ready":"False"
	I0719 05:09:09.748486  167777 node_ready.go:53] node "test-preload-332657" has status "Ready":"False"
	I0719 05:09:11.748723  167777 node_ready.go:53] node "test-preload-332657" has status "Ready":"False"
	I0719 05:09:12.748738  167777 node_ready.go:49] node "test-preload-332657" has status "Ready":"True"
	I0719 05:09:12.748771  167777 node_ready.go:38] duration metric: took 7.503926363s for node "test-preload-332657" to be "Ready" ...
	I0719 05:09:12.748780  167777 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 05:09:12.753473  167777 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-trt5k" in "kube-system" namespace to be "Ready" ...
	I0719 05:09:12.758529  167777 pod_ready.go:92] pod "coredns-6d4b75cb6d-trt5k" in "kube-system" namespace has status "Ready":"True"
	I0719 05:09:12.758555  167777 pod_ready.go:81] duration metric: took 5.059167ms for pod "coredns-6d4b75cb6d-trt5k" in "kube-system" namespace to be "Ready" ...
	I0719 05:09:12.758566  167777 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-332657" in "kube-system" namespace to be "Ready" ...
	I0719 05:09:12.762580  167777 pod_ready.go:92] pod "etcd-test-preload-332657" in "kube-system" namespace has status "Ready":"True"
	I0719 05:09:12.762600  167777 pod_ready.go:81] duration metric: took 4.025988ms for pod "etcd-test-preload-332657" in "kube-system" namespace to be "Ready" ...
	I0719 05:09:12.762609  167777 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-332657" in "kube-system" namespace to be "Ready" ...
	I0719 05:09:12.766657  167777 pod_ready.go:92] pod "kube-apiserver-test-preload-332657" in "kube-system" namespace has status "Ready":"True"
	I0719 05:09:12.766674  167777 pod_ready.go:81] duration metric: took 4.06013ms for pod "kube-apiserver-test-preload-332657" in "kube-system" namespace to be "Ready" ...
	I0719 05:09:12.766682  167777 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-332657" in "kube-system" namespace to be "Ready" ...
	I0719 05:09:12.771684  167777 pod_ready.go:92] pod "kube-controller-manager-test-preload-332657" in "kube-system" namespace has status "Ready":"True"
	I0719 05:09:12.771711  167777 pod_ready.go:81] duration metric: took 5.021445ms for pod "kube-controller-manager-test-preload-332657" in "kube-system" namespace to be "Ready" ...
	I0719 05:09:12.771728  167777 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zncv9" in "kube-system" namespace to be "Ready" ...
	I0719 05:09:13.149866  167777 pod_ready.go:92] pod "kube-proxy-zncv9" in "kube-system" namespace has status "Ready":"True"
	I0719 05:09:13.149890  167777 pod_ready.go:81] duration metric: took 378.154677ms for pod "kube-proxy-zncv9" in "kube-system" namespace to be "Ready" ...
	I0719 05:09:13.149899  167777 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-332657" in "kube-system" namespace to be "Ready" ...
	I0719 05:09:13.549481  167777 pod_ready.go:92] pod "kube-scheduler-test-preload-332657" in "kube-system" namespace has status "Ready":"True"
	I0719 05:09:13.549503  167777 pod_ready.go:81] duration metric: took 399.597863ms for pod "kube-scheduler-test-preload-332657" in "kube-system" namespace to be "Ready" ...
	I0719 05:09:13.549519  167777 pod_ready.go:38] duration metric: took 800.723133ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 05:09:13.549533  167777 api_server.go:52] waiting for apiserver process to appear ...
	I0719 05:09:13.549583  167777 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 05:09:13.562873  167777 api_server.go:72] duration metric: took 8.507312486s to wait for apiserver process to appear ...
	I0719 05:09:13.562899  167777 api_server.go:88] waiting for apiserver healthz status ...
	I0719 05:09:13.562925  167777 api_server.go:253] Checking apiserver healthz at https://192.168.39.207:8443/healthz ...
	I0719 05:09:13.567605  167777 api_server.go:279] https://192.168.39.207:8443/healthz returned 200:
	ok
	I0719 05:09:13.568850  167777 api_server.go:141] control plane version: v1.24.4
	I0719 05:09:13.568875  167777 api_server.go:131] duration metric: took 5.969876ms to wait for apiserver health ...
	I0719 05:09:13.568886  167777 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 05:09:13.752240  167777 system_pods.go:59] 7 kube-system pods found
	I0719 05:09:13.752276  167777 system_pods.go:61] "coredns-6d4b75cb6d-trt5k" [c5467929-b2d5-447e-bb33-d477400b09b4] Running
	I0719 05:09:13.752283  167777 system_pods.go:61] "etcd-test-preload-332657" [6d6a9494-b997-4046-b687-5cf7eec783ee] Running
	I0719 05:09:13.752289  167777 system_pods.go:61] "kube-apiserver-test-preload-332657" [e6b9096c-1cf7-42ea-8d04-589a6f417a23] Running
	I0719 05:09:13.752295  167777 system_pods.go:61] "kube-controller-manager-test-preload-332657" [be644f55-831f-4e71-a348-9e11c6cc323c] Running
	I0719 05:09:13.752300  167777 system_pods.go:61] "kube-proxy-zncv9" [2c7ef787-9dc8-4f6f-b729-a84aa1886c16] Running
	I0719 05:09:13.752310  167777 system_pods.go:61] "kube-scheduler-test-preload-332657" [47216010-2e3a-4303-849a-782938bcf128] Running
	I0719 05:09:13.752317  167777 system_pods.go:61] "storage-provisioner" [454dd667-bd69-4a62-9646-9174a0e0ba9c] Running
	I0719 05:09:13.752325  167777 system_pods.go:74] duration metric: took 183.431046ms to wait for pod list to return data ...
	I0719 05:09:13.752337  167777 default_sa.go:34] waiting for default service account to be created ...
	I0719 05:09:13.948536  167777 default_sa.go:45] found service account: "default"
	I0719 05:09:13.948566  167777 default_sa.go:55] duration metric: took 196.222015ms for default service account to be created ...
	I0719 05:09:13.948578  167777 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 05:09:14.151769  167777 system_pods.go:86] 7 kube-system pods found
	I0719 05:09:14.151796  167777 system_pods.go:89] "coredns-6d4b75cb6d-trt5k" [c5467929-b2d5-447e-bb33-d477400b09b4] Running
	I0719 05:09:14.151805  167777 system_pods.go:89] "etcd-test-preload-332657" [6d6a9494-b997-4046-b687-5cf7eec783ee] Running
	I0719 05:09:14.151810  167777 system_pods.go:89] "kube-apiserver-test-preload-332657" [e6b9096c-1cf7-42ea-8d04-589a6f417a23] Running
	I0719 05:09:14.151814  167777 system_pods.go:89] "kube-controller-manager-test-preload-332657" [be644f55-831f-4e71-a348-9e11c6cc323c] Running
	I0719 05:09:14.151817  167777 system_pods.go:89] "kube-proxy-zncv9" [2c7ef787-9dc8-4f6f-b729-a84aa1886c16] Running
	I0719 05:09:14.151821  167777 system_pods.go:89] "kube-scheduler-test-preload-332657" [47216010-2e3a-4303-849a-782938bcf128] Running
	I0719 05:09:14.151825  167777 system_pods.go:89] "storage-provisioner" [454dd667-bd69-4a62-9646-9174a0e0ba9c] Running
	I0719 05:09:14.151831  167777 system_pods.go:126] duration metric: took 203.247575ms to wait for k8s-apps to be running ...
	I0719 05:09:14.151839  167777 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 05:09:14.151882  167777 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 05:09:14.165319  167777 system_svc.go:56] duration metric: took 13.470562ms WaitForService to wait for kubelet
	I0719 05:09:14.165350  167777 kubeadm.go:582] duration metric: took 9.109791443s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 05:09:14.165376  167777 node_conditions.go:102] verifying NodePressure condition ...
	I0719 05:09:14.348975  167777 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 05:09:14.349004  167777 node_conditions.go:123] node cpu capacity is 2
	I0719 05:09:14.349020  167777 node_conditions.go:105] duration metric: took 183.633588ms to run NodePressure ...
	I0719 05:09:14.349031  167777 start.go:241] waiting for startup goroutines ...
	I0719 05:09:14.349038  167777 start.go:246] waiting for cluster config update ...
	I0719 05:09:14.349047  167777 start.go:255] writing updated cluster config ...
	I0719 05:09:14.349346  167777 ssh_runner.go:195] Run: rm -f paused
	I0719 05:09:14.398403  167777 start.go:600] kubectl: 1.30.3, cluster: 1.24.4 (minor skew: 6)
	I0719 05:09:14.400282  167777 out.go:177] 
	W0719 05:09:14.401460  167777 out.go:239] ! /usr/local/bin/kubectl is version 1.30.3, which may have incompatibilities with Kubernetes 1.24.4.
	I0719 05:09:14.402588  167777 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0719 05:09:14.403747  167777 out.go:177] * Done! kubectl is now configured to use "test-preload-332657" cluster and "default" namespace by default
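The closing warning compares the minor versions of the local kubectl (1.30.3) and the cluster (1.24.4) and reports a skew of 6, which exceeds kubectl's supported +/-1. A tiny sketch of that comparison, with the version strings hard-coded from the log for illustration:

    // skew.go: compute the kubectl-vs-cluster minor version skew.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    func minor(v string) int {
        parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
        if len(parts) < 2 {
            return -1
        }
        m, err := strconv.Atoi(parts[1])
        if err != nil {
            return -1
        }
        return m
    }

    func main() {
        kubectlVersion := "1.30.3" // values copied from the log above
        clusterVersion := "1.24.4"

        skew := minor(kubectlVersion) - minor(clusterVersion)
        if skew < 0 {
            skew = -skew
        }
        fmt.Printf("minor skew: %d\n", skew)
        if skew > 1 {
            fmt.Printf("! kubectl %s may have incompatibilities with Kubernetes %s\n",
                kubectlVersion, clusterVersion)
        }
    }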
	
	
	==> CRI-O <==
	Jul 19 05:09:15 test-preload-332657 crio[688]: time="2024-07-19 05:09:15.332725454Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9ed700320e145e0478c0f2acbe678950f4b8d55fbba6e5577a23ac0abcaf01d9,PodSandboxId:d0861ae577af4ada53849d021c74d48458dc7c628a7a7e22a1e93611d66c3760,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1721365750444194433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-trt5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5467929-b2d5-447e-bb33-d477400b09b4,},Annotations:map[string]string{io.kubernetes.container.hash: a54babc2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed599861ca1d8ec941c035b8f07376a17f334581dc1dfa2ef4e809ec9db728bb,PodSandboxId:c957252c66179be099d8054cf1d8622567f5740921126d3e1aec233435f92e90,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721365744472513381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 454dd667-bd69-4a62-9646-9174a0e0ba9c,},Annotations:map[string]string{io.kubernetes.container.hash: b2300fb5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51581f359efdcd584c4856ca62c66810f51af2b49c5802f82228515146d4913f,PodSandboxId:c957252c66179be099d8054cf1d8622567f5740921126d3e1aec233435f92e90,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721365743435629391,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 454dd667-bd69-4a62-9646-9174a0e0ba9c,},Annotations:map[string]string{io.kubernetes.container.hash: b2300fb5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05e5cd6f6c78cdee180e3882b0e5e9b9dcf59b9ade6ba70aebba820a31929e02,PodSandboxId:464515d83100dbbb7a0d1e00da152c69507b2fc059fb9be0a25afc8489a10e90,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1721365743361639050,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zncv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c7ef787-9dc8-4
f6f-b729-a84aa1886c16,},Annotations:map[string]string{io.kubernetes.container.hash: 1951138e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66e4e1035a17ada5f936bceb28c92104c29283004c21ee0f7e05b194458ed6ce,PodSandboxId:c7c401c2e72eb3d963f2dde47b0784ae35ab19e4ba075192bcddebb4ea4efa8a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1721365738077610058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-332657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44c100586dc057272ae853d8b8a80f93,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 8c5bce49,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69cc9d070a66f1b58cb19ce36d5704a2b4860f08ef66e7a3fdf45053aab85f41,PodSandboxId:0b003d807209937971c78c1497eb0f9994d18a26ae5549b13909f32c05e16e05,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1721365738094912985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-332657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f47a28300cf663485b50cd518fcb7d7,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9f1e721bf842a3d620ba86dae114b47de9657522af5859c7242fb35b4a83b1d,PodSandboxId:7328074d681e5935615d75500e8ad0c253e0e3a1473229761eee8d1391ad8cba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1721365738028509710,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-332657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5588bbfe3dddf8fa40abf5356eb59ab1,},Annotations:map[string]string{io.kubern
etes.container.hash: 2b8cd6e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02aaf6fcda9f6d69081e19ec33a949a8769926ac64503958755e56f7fa746247,PodSandboxId:367185f9ace76c6f8bc58a0dd3a05b21f069d462b436a9a0b0d49a29c69a56d6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1721365737998526146,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-332657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d84f1629b829277e2bf8444b8d79d3e9,},Annotations:map[string]
string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dab2955c-eb40-46ec-8d68-6423e8ed6bb4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 05:09:15 test-preload-332657 crio[688]: time="2024-07-19 05:09:15.373857329Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ff97c88c-a300-43e3-9ce1-1191709ca55a name=/runtime.v1.RuntimeService/Version
	Jul 19 05:09:15 test-preload-332657 crio[688]: time="2024-07-19 05:09:15.373943365Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ff97c88c-a300-43e3-9ce1-1191709ca55a name=/runtime.v1.RuntimeService/Version
	Jul 19 05:09:15 test-preload-332657 crio[688]: time="2024-07-19 05:09:15.375351511Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4edf5156-ba6e-429d-9cd5-df2298ce4fc9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 05:09:15 test-preload-332657 crio[688]: time="2024-07-19 05:09:15.375817592Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721365755375793728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4edf5156-ba6e-429d-9cd5-df2298ce4fc9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 05:09:15 test-preload-332657 crio[688]: time="2024-07-19 05:09:15.376635046Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fb46ae22-ee4f-47a7-b1f7-79474496d8a5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 05:09:15 test-preload-332657 crio[688]: time="2024-07-19 05:09:15.376691538Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fb46ae22-ee4f-47a7-b1f7-79474496d8a5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 05:09:15 test-preload-332657 crio[688]: time="2024-07-19 05:09:15.376869083Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9ed700320e145e0478c0f2acbe678950f4b8d55fbba6e5577a23ac0abcaf01d9,PodSandboxId:d0861ae577af4ada53849d021c74d48458dc7c628a7a7e22a1e93611d66c3760,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1721365750444194433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-trt5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5467929-b2d5-447e-bb33-d477400b09b4,},Annotations:map[string]string{io.kubernetes.container.hash: a54babc2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed599861ca1d8ec941c035b8f07376a17f334581dc1dfa2ef4e809ec9db728bb,PodSandboxId:c957252c66179be099d8054cf1d8622567f5740921126d3e1aec233435f92e90,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721365744472513381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 454dd667-bd69-4a62-9646-9174a0e0ba9c,},Annotations:map[string]string{io.kubernetes.container.hash: b2300fb5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51581f359efdcd584c4856ca62c66810f51af2b49c5802f82228515146d4913f,PodSandboxId:c957252c66179be099d8054cf1d8622567f5740921126d3e1aec233435f92e90,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721365743435629391,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 454dd667-bd69-4a62-9646-9174a0e0ba9c,},Annotations:map[string]string{io.kubernetes.container.hash: b2300fb5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05e5cd6f6c78cdee180e3882b0e5e9b9dcf59b9ade6ba70aebba820a31929e02,PodSandboxId:464515d83100dbbb7a0d1e00da152c69507b2fc059fb9be0a25afc8489a10e90,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1721365743361639050,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zncv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c7ef787-9dc8-4
f6f-b729-a84aa1886c16,},Annotations:map[string]string{io.kubernetes.container.hash: 1951138e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66e4e1035a17ada5f936bceb28c92104c29283004c21ee0f7e05b194458ed6ce,PodSandboxId:c7c401c2e72eb3d963f2dde47b0784ae35ab19e4ba075192bcddebb4ea4efa8a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1721365738077610058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-332657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44c100586dc057272ae853d8b8a80f93,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 8c5bce49,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69cc9d070a66f1b58cb19ce36d5704a2b4860f08ef66e7a3fdf45053aab85f41,PodSandboxId:0b003d807209937971c78c1497eb0f9994d18a26ae5549b13909f32c05e16e05,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1721365738094912985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-332657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f47a28300cf663485b50cd518fcb7d7,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9f1e721bf842a3d620ba86dae114b47de9657522af5859c7242fb35b4a83b1d,PodSandboxId:7328074d681e5935615d75500e8ad0c253e0e3a1473229761eee8d1391ad8cba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1721365738028509710,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-332657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5588bbfe3dddf8fa40abf5356eb59ab1,},Annotations:map[string]string{io.kubern
etes.container.hash: 2b8cd6e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02aaf6fcda9f6d69081e19ec33a949a8769926ac64503958755e56f7fa746247,PodSandboxId:367185f9ace76c6f8bc58a0dd3a05b21f069d462b436a9a0b0d49a29c69a56d6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1721365737998526146,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-332657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d84f1629b829277e2bf8444b8d79d3e9,},Annotations:map[string]
string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fb46ae22-ee4f-47a7-b1f7-79474496d8a5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 05:09:15 test-preload-332657 crio[688]: time="2024-07-19 05:09:15.413656380Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f5d4464a-a4f3-4016-b17d-f73b680f8a67 name=/runtime.v1.RuntimeService/Version
	Jul 19 05:09:15 test-preload-332657 crio[688]: time="2024-07-19 05:09:15.413736928Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f5d4464a-a4f3-4016-b17d-f73b680f8a67 name=/runtime.v1.RuntimeService/Version
	Jul 19 05:09:15 test-preload-332657 crio[688]: time="2024-07-19 05:09:15.414916728Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a437ab3a-423e-4f84-b556-75e11cd758c2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 05:09:15 test-preload-332657 crio[688]: time="2024-07-19 05:09:15.415378739Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721365755415357376,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a437ab3a-423e-4f84-b556-75e11cd758c2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 05:09:15 test-preload-332657 crio[688]: time="2024-07-19 05:09:15.415886992Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5dd7a3d4-76ce-4d75-b29c-60c1688ceaff name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 05:09:15 test-preload-332657 crio[688]: time="2024-07-19 05:09:15.415957543Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5dd7a3d4-76ce-4d75-b29c-60c1688ceaff name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 05:09:15 test-preload-332657 crio[688]: time="2024-07-19 05:09:15.416179870Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9ed700320e145e0478c0f2acbe678950f4b8d55fbba6e5577a23ac0abcaf01d9,PodSandboxId:d0861ae577af4ada53849d021c74d48458dc7c628a7a7e22a1e93611d66c3760,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1721365750444194433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-trt5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5467929-b2d5-447e-bb33-d477400b09b4,},Annotations:map[string]string{io.kubernetes.container.hash: a54babc2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed599861ca1d8ec941c035b8f07376a17f334581dc1dfa2ef4e809ec9db728bb,PodSandboxId:c957252c66179be099d8054cf1d8622567f5740921126d3e1aec233435f92e90,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721365744472513381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 454dd667-bd69-4a62-9646-9174a0e0ba9c,},Annotations:map[string]string{io.kubernetes.container.hash: b2300fb5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51581f359efdcd584c4856ca62c66810f51af2b49c5802f82228515146d4913f,PodSandboxId:c957252c66179be099d8054cf1d8622567f5740921126d3e1aec233435f92e90,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721365743435629391,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 454dd667-bd69-4a62-9646-9174a0e0ba9c,},Annotations:map[string]string{io.kubernetes.container.hash: b2300fb5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05e5cd6f6c78cdee180e3882b0e5e9b9dcf59b9ade6ba70aebba820a31929e02,PodSandboxId:464515d83100dbbb7a0d1e00da152c69507b2fc059fb9be0a25afc8489a10e90,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1721365743361639050,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zncv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c7ef787-9dc8-4
f6f-b729-a84aa1886c16,},Annotations:map[string]string{io.kubernetes.container.hash: 1951138e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66e4e1035a17ada5f936bceb28c92104c29283004c21ee0f7e05b194458ed6ce,PodSandboxId:c7c401c2e72eb3d963f2dde47b0784ae35ab19e4ba075192bcddebb4ea4efa8a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1721365738077610058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-332657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44c100586dc057272ae853d8b8a80f93,},Annotations:map[s
tring]string{io.kubernetes.container.hash: 8c5bce49,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69cc9d070a66f1b58cb19ce36d5704a2b4860f08ef66e7a3fdf45053aab85f41,PodSandboxId:0b003d807209937971c78c1497eb0f9994d18a26ae5549b13909f32c05e16e05,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1721365738094912985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-332657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f47a28300cf663485b50cd518fcb7d7,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9f1e721bf842a3d620ba86dae114b47de9657522af5859c7242fb35b4a83b1d,PodSandboxId:7328074d681e5935615d75500e8ad0c253e0e3a1473229761eee8d1391ad8cba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1721365738028509710,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-332657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5588bbfe3dddf8fa40abf5356eb59ab1,},Annotations:map[string]string{io.kubern
etes.container.hash: 2b8cd6e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02aaf6fcda9f6d69081e19ec33a949a8769926ac64503958755e56f7fa746247,PodSandboxId:367185f9ace76c6f8bc58a0dd3a05b21f069d462b436a9a0b0d49a29c69a56d6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1721365737998526146,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-332657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d84f1629b829277e2bf8444b8d79d3e9,},Annotations:map[string]
string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5dd7a3d4-76ce-4d75-b29c-60c1688ceaff name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 05:09:15 test-preload-332657 crio[688]: time="2024-07-19 05:09:15.429046230Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d53f68b-8ec7-4f5d-971e-8ded8885ae91 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 19 05:09:15 test-preload-332657 crio[688]: time="2024-07-19 05:09:15.429256656Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d0861ae577af4ada53849d021c74d48458dc7c628a7a7e22a1e93611d66c3760,Metadata:&PodSandboxMetadata{Name:coredns-6d4b75cb6d-trt5k,Uid:c5467929-b2d5-447e-bb33-d477400b09b4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721365750237867265,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6d4b75cb6d-trt5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5467929-b2d5-447e-bb33-d477400b09b4,k8s-app: kube-dns,pod-template-hash: 6d4b75cb6d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T05:09:02.331562189Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c957252c66179be099d8054cf1d8622567f5740921126d3e1aec233435f92e90,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:454dd667-bd69-4a62-9646-9174a0e0ba9c,Namespace:kube-syste
m,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721365743251888230,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 454dd667-bd69-4a62-9646-9174a0e0ba9c,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath
\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-19T05:09:02.331557987Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:464515d83100dbbb7a0d1e00da152c69507b2fc059fb9be0a25afc8489a10e90,Metadata:&PodSandboxMetadata{Name:kube-proxy-zncv9,Uid:2c7ef787-9dc8-4f6f-b729-a84aa1886c16,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721365743240870101,Labels:map[string]string{controller-revision-hash: 6fd4744df8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-zncv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c7ef787-9dc8-4f6f-b729-a84aa1886c16,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T05:09:02.331574164Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:367185f9ace76c6f8bc58a0dd3a05b21f069d462b436a9a0b0d49a29c69a56d6,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-332657,Ui
d:d84f1629b829277e2bf8444b8d79d3e9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721365737859703946,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-332657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d84f1629b829277e2bf8444b8d79d3e9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d84f1629b829277e2bf8444b8d79d3e9,kubernetes.io/config.seen: 2024-07-19T05:08:57.341805256Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0b003d807209937971c78c1497eb0f9994d18a26ae5549b13909f32c05e16e05,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-332657,Uid:4f47a28300cf663485b50cd518fcb7d7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721365737857867305,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-332657,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f47a28300cf663485b50cd518fcb7d7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 4f47a28300cf663485b50cd518fcb7d7,kubernetes.io/config.seen: 2024-07-19T05:08:57.341806346Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c7c401c2e72eb3d963f2dde47b0784ae35ab19e4ba075192bcddebb4ea4efa8a,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-332657,Uid:44c100586dc057272ae853d8b8a80f93,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721365737853520964,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-332657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44c100586dc057272ae853d8b8a80f93,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.207:2379,kubernetes.io/config.hash: 44c100586dc057272ae853d8b8a80f93,kubernetes.io/config.seen: 2024-07-19T05
:08:57.342153608Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7328074d681e5935615d75500e8ad0c253e0e3a1473229761eee8d1391ad8cba,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-332657,Uid:5588bbfe3dddf8fa40abf5356eb59ab1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721365737849237938,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-332657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5588bbfe3dddf8fa40abf5356eb59ab1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.207:8443,kubernetes.io/config.hash: 5588bbfe3dddf8fa40abf5356eb59ab1,kubernetes.io/config.seen: 2024-07-19T05:08:57.341787305Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=7d53f68b-8ec7-4f5d-971e-8ded8885ae91 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 19 05:09:15 test-preload-332657 crio[688]: time="2024-07-19 05:09:15.430395112Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=23f115ed-8bc8-4f43-aeba-591ec5319ec5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 05:09:15 test-preload-332657 crio[688]: time="2024-07-19 05:09:15.430458699Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=23f115ed-8bc8-4f43-aeba-591ec5319ec5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 05:09:15 test-preload-332657 crio[688]: time="2024-07-19 05:09:15.430616254Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9ed700320e145e0478c0f2acbe678950f4b8d55fbba6e5577a23ac0abcaf01d9,PodSandboxId:d0861ae577af4ada53849d021c74d48458dc7c628a7a7e22a1e93611d66c3760,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1721365750444194433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-trt5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5467929-b2d5-447e-bb33-d477400b09b4,},Annotations:map[string]string{io.kubernetes.container.hash: a54babc2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed599861ca1d8ec941c035b8f07376a17f334581dc1dfa2ef4e809ec9db728bb,PodSandboxId:c957252c66179be099d8054cf1d8622567f5740921126d3e1aec233435f92e90,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721365744472513381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 454dd667-bd69-4a62-9646-9174a0e0ba9c,},Annotations:map[string]string{io.kubernetes.container.hash: b2300fb5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05e5cd6f6c78cdee180e3882b0e5e9b9dcf59b9ade6ba70aebba820a31929e02,PodSandboxId:464515d83100dbbb7a0d1e00da152c69507b2fc059fb9be0a25afc8489a10e90,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1721365743361639050,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zncv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c
7ef787-9dc8-4f6f-b729-a84aa1886c16,},Annotations:map[string]string{io.kubernetes.container.hash: 1951138e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66e4e1035a17ada5f936bceb28c92104c29283004c21ee0f7e05b194458ed6ce,PodSandboxId:c7c401c2e72eb3d963f2dde47b0784ae35ab19e4ba075192bcddebb4ea4efa8a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1721365738077610058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-332657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44c100586dc057272ae853d8b8a80f93,},Anno
tations:map[string]string{io.kubernetes.container.hash: 8c5bce49,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69cc9d070a66f1b58cb19ce36d5704a2b4860f08ef66e7a3fdf45053aab85f41,PodSandboxId:0b003d807209937971c78c1497eb0f9994d18a26ae5549b13909f32c05e16e05,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1721365738094912985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-332657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f47a28300cf663485b50cd518fcb7d7,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9f1e721bf842a3d620ba86dae114b47de9657522af5859c7242fb35b4a83b1d,PodSandboxId:7328074d681e5935615d75500e8ad0c253e0e3a1473229761eee8d1391ad8cba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1721365738028509710,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-332657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5588bbfe3dddf8fa40abf5356eb59ab1,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 2b8cd6e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02aaf6fcda9f6d69081e19ec33a949a8769926ac64503958755e56f7fa746247,PodSandboxId:367185f9ace76c6f8bc58a0dd3a05b21f069d462b436a9a0b0d49a29c69a56d6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1721365737998526146,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-332657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d84f1629b829277e2bf8444b8d79d3e9,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=23f115ed-8bc8-4f43-aeba-591ec5319ec5 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 05:09:15 test-preload-332657 crio[688]: time="2024-07-19 05:09:15.431674836Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9207f569-1599-4b82-bc4a-b08d56b6d508 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 19 05:09:15 test-preload-332657 crio[688]: time="2024-07-19 05:09:15.431826635Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d0861ae577af4ada53849d021c74d48458dc7c628a7a7e22a1e93611d66c3760,Metadata:&PodSandboxMetadata{Name:coredns-6d4b75cb6d-trt5k,Uid:c5467929-b2d5-447e-bb33-d477400b09b4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721365750237867265,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6d4b75cb6d-trt5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5467929-b2d5-447e-bb33-d477400b09b4,k8s-app: kube-dns,pod-template-hash: 6d4b75cb6d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T05:09:02.331562189Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c957252c66179be099d8054cf1d8622567f5740921126d3e1aec233435f92e90,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:454dd667-bd69-4a62-9646-9174a0e0ba9c,Namespace:kube-syste
m,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721365743251888230,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 454dd667-bd69-4a62-9646-9174a0e0ba9c,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath
\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-19T05:09:02.331557987Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:464515d83100dbbb7a0d1e00da152c69507b2fc059fb9be0a25afc8489a10e90,Metadata:&PodSandboxMetadata{Name:kube-proxy-zncv9,Uid:2c7ef787-9dc8-4f6f-b729-a84aa1886c16,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721365743240870101,Labels:map[string]string{controller-revision-hash: 6fd4744df8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-zncv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c7ef787-9dc8-4f6f-b729-a84aa1886c16,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T05:09:02.331574164Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:367185f9ace76c6f8bc58a0dd3a05b21f069d462b436a9a0b0d49a29c69a56d6,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-332657,Ui
d:d84f1629b829277e2bf8444b8d79d3e9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721365737859703946,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-332657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d84f1629b829277e2bf8444b8d79d3e9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d84f1629b829277e2bf8444b8d79d3e9,kubernetes.io/config.seen: 2024-07-19T05:08:57.341805256Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0b003d807209937971c78c1497eb0f9994d18a26ae5549b13909f32c05e16e05,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-332657,Uid:4f47a28300cf663485b50cd518fcb7d7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721365737857867305,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-332657,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f47a28300cf663485b50cd518fcb7d7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 4f47a28300cf663485b50cd518fcb7d7,kubernetes.io/config.seen: 2024-07-19T05:08:57.341806346Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c7c401c2e72eb3d963f2dde47b0784ae35ab19e4ba075192bcddebb4ea4efa8a,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-332657,Uid:44c100586dc057272ae853d8b8a80f93,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721365737853520964,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-332657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44c100586dc057272ae853d8b8a80f93,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.207:2379,kubernetes.io/config.hash: 44c100586dc057272ae853d8b8a80f93,kubernetes.io/config.seen: 2024-07-19T05
:08:57.342153608Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7328074d681e5935615d75500e8ad0c253e0e3a1473229761eee8d1391ad8cba,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-332657,Uid:5588bbfe3dddf8fa40abf5356eb59ab1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1721365737849237938,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-332657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5588bbfe3dddf8fa40abf5356eb59ab1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.207:8443,kubernetes.io/config.hash: 5588bbfe3dddf8fa40abf5356eb59ab1,kubernetes.io/config.seen: 2024-07-19T05:08:57.341787305Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=9207f569-1599-4b82-bc4a-b08d56b6d508 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 19 05:09:15 test-preload-332657 crio[688]: time="2024-07-19 05:09:15.432318263Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=286066c8-527d-4028-a70e-b58e10288360 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 05:09:15 test-preload-332657 crio[688]: time="2024-07-19 05:09:15.432362416Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=286066c8-527d-4028-a70e-b58e10288360 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 05:09:15 test-preload-332657 crio[688]: time="2024-07-19 05:09:15.432516030Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9ed700320e145e0478c0f2acbe678950f4b8d55fbba6e5577a23ac0abcaf01d9,PodSandboxId:d0861ae577af4ada53849d021c74d48458dc7c628a7a7e22a1e93611d66c3760,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1721365750444194433,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-trt5k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5467929-b2d5-447e-bb33-d477400b09b4,},Annotations:map[string]string{io.kubernetes.container.hash: a54babc2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed599861ca1d8ec941c035b8f07376a17f334581dc1dfa2ef4e809ec9db728bb,PodSandboxId:c957252c66179be099d8054cf1d8622567f5740921126d3e1aec233435f92e90,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1721365744472513381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 454dd667-bd69-4a62-9646-9174a0e0ba9c,},Annotations:map[string]string{io.kubernetes.container.hash: b2300fb5,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05e5cd6f6c78cdee180e3882b0e5e9b9dcf59b9ade6ba70aebba820a31929e02,PodSandboxId:464515d83100dbbb7a0d1e00da152c69507b2fc059fb9be0a25afc8489a10e90,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1721365743361639050,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zncv9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c
7ef787-9dc8-4f6f-b729-a84aa1886c16,},Annotations:map[string]string{io.kubernetes.container.hash: 1951138e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66e4e1035a17ada5f936bceb28c92104c29283004c21ee0f7e05b194458ed6ce,PodSandboxId:c7c401c2e72eb3d963f2dde47b0784ae35ab19e4ba075192bcddebb4ea4efa8a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1721365738077610058,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-332657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44c100586dc057272ae853d8b8a80f93,},Anno
tations:map[string]string{io.kubernetes.container.hash: 8c5bce49,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69cc9d070a66f1b58cb19ce36d5704a2b4860f08ef66e7a3fdf45053aab85f41,PodSandboxId:0b003d807209937971c78c1497eb0f9994d18a26ae5549b13909f32c05e16e05,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1721365738094912985,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-332657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f47a28300cf663485b50cd518fcb7d7,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9f1e721bf842a3d620ba86dae114b47de9657522af5859c7242fb35b4a83b1d,PodSandboxId:7328074d681e5935615d75500e8ad0c253e0e3a1473229761eee8d1391ad8cba,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1721365738028509710,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-332657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5588bbfe3dddf8fa40abf5356eb59ab1,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 2b8cd6e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02aaf6fcda9f6d69081e19ec33a949a8769926ac64503958755e56f7fa746247,PodSandboxId:367185f9ace76c6f8bc58a0dd3a05b21f069d462b436a9a0b0d49a29c69a56d6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1721365737998526146,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-332657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d84f1629b829277e2bf8444b8d79d3e9,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=286066c8-527d-4028-a70e-b58e10288360 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9ed700320e145       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   5 seconds ago       Running             coredns                   1                   d0861ae577af4       coredns-6d4b75cb6d-trt5k
	ed599861ca1d8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   11 seconds ago      Running             storage-provisioner       2                   c957252c66179       storage-provisioner
	51581f359efdc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 seconds ago      Exited              storage-provisioner       1                   c957252c66179       storage-provisioner
	05e5cd6f6c78c       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   12 seconds ago      Running             kube-proxy                1                   464515d83100d       kube-proxy-zncv9
	69cc9d070a66f       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   17 seconds ago      Running             kube-scheduler            1                   0b003d8072099       kube-scheduler-test-preload-332657
	66e4e1035a17a       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   17 seconds ago      Running             etcd                      1                   c7c401c2e72eb       etcd-test-preload-332657
	d9f1e721bf842       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   17 seconds ago      Running             kube-apiserver            1                   7328074d681e5       kube-apiserver-test-preload-332657
	02aaf6fcda9f6       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   17 seconds ago      Running             kube-controller-manager   1                   367185f9ace76       kube-controller-manager-test-preload-332657
	
	
	==> coredns [9ed700320e145e0478c0f2acbe678950f4b8d55fbba6e5577a23ac0abcaf01d9] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:37586 - 62015 "HINFO IN 3947506576853369016.4182001214271629271. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008692464s
	
	
	==> describe nodes <==
	Name:               test-preload-332657
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-332657
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=test-preload-332657
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T05_07_42_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 05:07:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-332657
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 05:09:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 05:09:12 +0000   Fri, 19 Jul 2024 05:07:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 05:09:12 +0000   Fri, 19 Jul 2024 05:07:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 05:09:12 +0000   Fri, 19 Jul 2024 05:07:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 05:09:12 +0000   Fri, 19 Jul 2024 05:09:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.207
	  Hostname:    test-preload-332657
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 95c30f8281a9441d8082f8c117afcf54
	  System UUID:                95c30f82-81a9-441d-8082-f8c117afcf54
	  Boot ID:                    38b521d5-320e-4bd5-b6fe-ec3c4bae2d7e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-trt5k                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     80s
	  kube-system                 etcd-test-preload-332657                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         93s
	  kube-system                 kube-apiserver-test-preload-332657             250m (12%)    0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-controller-manager-test-preload-332657    200m (10%)    0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-proxy-zncv9                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-scheduler-test-preload-332657             100m (5%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 11s                  kube-proxy       
	  Normal  Starting                 79s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  101s (x5 over 101s)  kubelet          Node test-preload-332657 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    101s (x4 over 101s)  kubelet          Node test-preload-332657 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     101s (x4 over 101s)  kubelet          Node test-preload-332657 status is now: NodeHasSufficientPID
	  Normal  Starting                 93s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  93s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  93s                  kubelet          Node test-preload-332657 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    93s                  kubelet          Node test-preload-332657 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     93s                  kubelet          Node test-preload-332657 status is now: NodeHasSufficientPID
	  Normal  NodeReady                82s                  kubelet          Node test-preload-332657 status is now: NodeReady
	  Normal  RegisteredNode           81s                  node-controller  Node test-preload-332657 event: Registered Node test-preload-332657 in Controller
	  Normal  Starting                 18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18s (x8 over 18s)    kubelet          Node test-preload-332657 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18s (x8 over 18s)    kubelet          Node test-preload-332657 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18s (x7 over 18s)    kubelet          Node test-preload-332657 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                   node-controller  Node test-preload-332657 event: Registered Node test-preload-332657 in Controller
	
	
	==> dmesg <==
	[Jul19 05:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050362] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.035870] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.427239] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.737900] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.523493] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.573050] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.059872] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.046962] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.171288] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.115368] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.247594] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[ +12.478172] systemd-fstab-generator[950]: Ignoring "noauto" option for root device
	[  +0.062872] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.778315] systemd-fstab-generator[1079]: Ignoring "noauto" option for root device
	[Jul19 05:09] kauditd_printk_skb: 105 callbacks suppressed
	[  +2.269075] systemd-fstab-generator[1748]: Ignoring "noauto" option for root device
	[  +5.136707] kauditd_printk_skb: 58 callbacks suppressed
	
	
	==> etcd [66e4e1035a17ada5f936bceb28c92104c29283004c21ee0f7e05b194458ed6ce] <==
	{"level":"info","ts":"2024-07-19T05:08:58.612Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"ead4a4b8bd8924e3","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-19T05:08:58.619Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-19T05:08:58.619Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ead4a4b8bd8924e3 switched to configuration voters=(16921330813298615523)"}
	{"level":"info","ts":"2024-07-19T05:08:58.619Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7fc3162940ce7ea7","local-member-id":"ead4a4b8bd8924e3","added-peer-id":"ead4a4b8bd8924e3","added-peer-peer-urls":["https://192.168.39.207:2380"]}
	{"level":"info","ts":"2024-07-19T05:08:58.619Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7fc3162940ce7ea7","local-member-id":"ead4a4b8bd8924e3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T05:08:58.619Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T05:08:58.634Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-19T05:08:58.634Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ead4a4b8bd8924e3","initial-advertise-peer-urls":["https://192.168.39.207:2380"],"listen-peer-urls":["https://192.168.39.207:2380"],"advertise-client-urls":["https://192.168.39.207:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.207:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-19T05:08:58.634Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-19T05:08:58.638Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.207:2380"}
	{"level":"info","ts":"2024-07-19T05:08:58.638Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.207:2380"}
	{"level":"info","ts":"2024-07-19T05:08:59.782Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ead4a4b8bd8924e3 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-19T05:08:59.783Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ead4a4b8bd8924e3 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-19T05:08:59.783Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ead4a4b8bd8924e3 received MsgPreVoteResp from ead4a4b8bd8924e3 at term 2"}
	{"level":"info","ts":"2024-07-19T05:08:59.783Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ead4a4b8bd8924e3 became candidate at term 3"}
	{"level":"info","ts":"2024-07-19T05:08:59.783Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ead4a4b8bd8924e3 received MsgVoteResp from ead4a4b8bd8924e3 at term 3"}
	{"level":"info","ts":"2024-07-19T05:08:59.783Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ead4a4b8bd8924e3 became leader at term 3"}
	{"level":"info","ts":"2024-07-19T05:08:59.783Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ead4a4b8bd8924e3 elected leader ead4a4b8bd8924e3 at term 3"}
	{"level":"info","ts":"2024-07-19T05:08:59.784Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ead4a4b8bd8924e3","local-member-attributes":"{Name:test-preload-332657 ClientURLs:[https://192.168.39.207:2379]}","request-path":"/0/members/ead4a4b8bd8924e3/attributes","cluster-id":"7fc3162940ce7ea7","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T05:08:59.785Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T05:08:59.785Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T05:08:59.786Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-19T05:08:59.787Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.207:2379"}
	{"level":"info","ts":"2024-07-19T05:08:59.787Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T05:08:59.787Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 05:09:15 up 0 min,  0 users,  load average: 0.61, 0.17, 0.06
	Linux test-preload-332657 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d9f1e721bf842a3d620ba86dae114b47de9657522af5859c7242fb35b4a83b1d] <==
	I0719 05:09:02.142413       1 controller.go:85] Starting OpenAPI V3 controller
	I0719 05:09:02.142458       1 naming_controller.go:291] Starting NamingConditionController
	I0719 05:09:02.142708       1 establishing_controller.go:76] Starting EstablishingController
	I0719 05:09:02.142776       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0719 05:09:02.142816       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0719 05:09:02.142853       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0719 05:09:02.234179       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0719 05:09:02.242209       1 shared_informer.go:262] Caches are synced for crd-autoregister
	E0719 05:09:02.245504       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0719 05:09:02.308420       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0719 05:09:02.320833       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0719 05:09:02.322135       1 cache.go:39] Caches are synced for autoregister controller
	I0719 05:09:02.322424       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0719 05:09:02.326825       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0719 05:09:02.326932       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0719 05:09:02.768827       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0719 05:09:03.124077       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0719 05:09:03.604767       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0719 05:09:03.616313       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0719 05:09:03.645378       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0719 05:09:03.671641       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0719 05:09:03.677563       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0719 05:09:03.785355       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0719 05:09:14.621151       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0719 05:09:14.715607       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [02aaf6fcda9f6d69081e19ec33a949a8769926ac64503958755e56f7fa746247] <==
	I0719 05:09:14.577144       1 shared_informer.go:262] Caches are synced for node
	I0719 05:09:14.577171       1 range_allocator.go:173] Starting range CIDR allocator
	I0719 05:09:14.577176       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0719 05:09:14.577264       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0719 05:09:14.578326       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0719 05:09:14.579873       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0719 05:09:14.580337       1 shared_informer.go:262] Caches are synced for attach detach
	I0719 05:09:14.581923       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0719 05:09:14.583733       1 shared_informer.go:262] Caches are synced for cronjob
	I0719 05:09:14.589147       1 shared_informer.go:262] Caches are synced for endpoint
	I0719 05:09:14.596347       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0719 05:09:14.597497       1 shared_informer.go:262] Caches are synced for service account
	I0719 05:09:14.598808       1 shared_informer.go:262] Caches are synced for job
	I0719 05:09:14.601133       1 shared_informer.go:262] Caches are synced for stateful set
	I0719 05:09:14.604674       1 shared_informer.go:262] Caches are synced for persistent volume
	I0719 05:09:14.606754       1 shared_informer.go:262] Caches are synced for crt configmap
	I0719 05:09:14.607757       1 shared_informer.go:262] Caches are synced for ephemeral
	I0719 05:09:14.629466       1 shared_informer.go:262] Caches are synced for namespace
	I0719 05:09:14.672109       1 shared_informer.go:262] Caches are synced for HPA
	I0719 05:09:14.702664       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0719 05:09:14.796802       1 shared_informer.go:262] Caches are synced for resource quota
	I0719 05:09:14.823956       1 shared_informer.go:262] Caches are synced for resource quota
	I0719 05:09:15.233382       1 shared_informer.go:262] Caches are synced for garbage collector
	I0719 05:09:15.235647       1 shared_informer.go:262] Caches are synced for garbage collector
	I0719 05:09:15.235679       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [05e5cd6f6c78cdee180e3882b0e5e9b9dcf59b9ade6ba70aebba820a31929e02] <==
	I0719 05:09:03.749318       1 node.go:163] Successfully retrieved node IP: 192.168.39.207
	I0719 05:09:03.749372       1 server_others.go:138] "Detected node IP" address="192.168.39.207"
	I0719 05:09:03.749421       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0719 05:09:03.778054       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0719 05:09:03.778079       1 server_others.go:206] "Using iptables Proxier"
	I0719 05:09:03.778121       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0719 05:09:03.778793       1 server.go:661] "Version info" version="v1.24.4"
	I0719 05:09:03.778817       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 05:09:03.780326       1 config.go:317] "Starting service config controller"
	I0719 05:09:03.780692       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0719 05:09:03.780763       1 config.go:226] "Starting endpoint slice config controller"
	I0719 05:09:03.780780       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0719 05:09:03.781795       1 config.go:444] "Starting node config controller"
	I0719 05:09:03.781815       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0719 05:09:03.881512       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0719 05:09:03.881544       1 shared_informer.go:262] Caches are synced for service config
	I0719 05:09:03.882383       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [69cc9d070a66f1b58cb19ce36d5704a2b4860f08ef66e7a3fdf45053aab85f41] <==
	I0719 05:08:59.289563       1 serving.go:348] Generated self-signed cert in-memory
	W0719 05:09:02.192401       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0719 05:09:02.192632       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 05:09:02.192723       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0719 05:09:02.192752       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0719 05:09:02.259183       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0719 05:09:02.259928       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 05:09:02.266525       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0719 05:09:02.266714       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 05:09:02.267280       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0719 05:09:02.267393       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0719 05:09:02.367169       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 19 05:09:02 test-preload-332657 kubelet[1086]: I0719 05:09:02.333515    1086 topology_manager.go:200] "Topology Admit Handler"
	Jul 19 05:09:02 test-preload-332657 kubelet[1086]: I0719 05:09:02.333657    1086 topology_manager.go:200] "Topology Admit Handler"
	Jul 19 05:09:02 test-preload-332657 kubelet[1086]: E0719 05:09:02.336655    1086 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-trt5k" podUID=c5467929-b2d5-447e-bb33-d477400b09b4
	Jul 19 05:09:02 test-preload-332657 kubelet[1086]: E0719 05:09:02.384095    1086 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jul 19 05:09:02 test-preload-332657 kubelet[1086]: I0719 05:09:02.398614    1086 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4f2s\" (UniqueName: \"kubernetes.io/projected/c5467929-b2d5-447e-bb33-d477400b09b4-kube-api-access-f4f2s\") pod \"coredns-6d4b75cb6d-trt5k\" (UID: \"c5467929-b2d5-447e-bb33-d477400b09b4\") " pod="kube-system/coredns-6d4b75cb6d-trt5k"
	Jul 19 05:09:02 test-preload-332657 kubelet[1086]: I0719 05:09:02.398673    1086 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcftt\" (UniqueName: \"kubernetes.io/projected/454dd667-bd69-4a62-9646-9174a0e0ba9c-kube-api-access-kcftt\") pod \"storage-provisioner\" (UID: \"454dd667-bd69-4a62-9646-9174a0e0ba9c\") " pod="kube-system/storage-provisioner"
	Jul 19 05:09:02 test-preload-332657 kubelet[1086]: I0719 05:09:02.398704    1086 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c7ef787-9dc8-4f6f-b729-a84aa1886c16-xtables-lock\") pod \"kube-proxy-zncv9\" (UID: \"2c7ef787-9dc8-4f6f-b729-a84aa1886c16\") " pod="kube-system/kube-proxy-zncv9"
	Jul 19 05:09:02 test-preload-332657 kubelet[1086]: I0719 05:09:02.398724    1086 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c7ef787-9dc8-4f6f-b729-a84aa1886c16-lib-modules\") pod \"kube-proxy-zncv9\" (UID: \"2c7ef787-9dc8-4f6f-b729-a84aa1886c16\") " pod="kube-system/kube-proxy-zncv9"
	Jul 19 05:09:02 test-preload-332657 kubelet[1086]: I0719 05:09:02.398747    1086 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqjlm\" (UniqueName: \"kubernetes.io/projected/2c7ef787-9dc8-4f6f-b729-a84aa1886c16-kube-api-access-tqjlm\") pod \"kube-proxy-zncv9\" (UID: \"2c7ef787-9dc8-4f6f-b729-a84aa1886c16\") " pod="kube-system/kube-proxy-zncv9"
	Jul 19 05:09:02 test-preload-332657 kubelet[1086]: I0719 05:09:02.398766    1086 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2c7ef787-9dc8-4f6f-b729-a84aa1886c16-kube-proxy\") pod \"kube-proxy-zncv9\" (UID: \"2c7ef787-9dc8-4f6f-b729-a84aa1886c16\") " pod="kube-system/kube-proxy-zncv9"
	Jul 19 05:09:02 test-preload-332657 kubelet[1086]: I0719 05:09:02.398784    1086 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/454dd667-bd69-4a62-9646-9174a0e0ba9c-tmp\") pod \"storage-provisioner\" (UID: \"454dd667-bd69-4a62-9646-9174a0e0ba9c\") " pod="kube-system/storage-provisioner"
	Jul 19 05:09:02 test-preload-332657 kubelet[1086]: I0719 05:09:02.398807    1086 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5467929-b2d5-447e-bb33-d477400b09b4-config-volume\") pod \"coredns-6d4b75cb6d-trt5k\" (UID: \"c5467929-b2d5-447e-bb33-d477400b09b4\") " pod="kube-system/coredns-6d4b75cb6d-trt5k"
	Jul 19 05:09:02 test-preload-332657 kubelet[1086]: I0719 05:09:02.398821    1086 reconciler.go:159] "Reconciler: start to sync state"
	Jul 19 05:09:02 test-preload-332657 kubelet[1086]: E0719 05:09:02.502319    1086 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 19 05:09:02 test-preload-332657 kubelet[1086]: E0719 05:09:02.502499    1086 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/c5467929-b2d5-447e-bb33-d477400b09b4-config-volume podName:c5467929-b2d5-447e-bb33-d477400b09b4 nodeName:}" failed. No retries permitted until 2024-07-19 05:09:03.002429563 +0000 UTC m=+5.795243028 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5467929-b2d5-447e-bb33-d477400b09b4-config-volume") pod "coredns-6d4b75cb6d-trt5k" (UID: "c5467929-b2d5-447e-bb33-d477400b09b4") : object "kube-system"/"coredns" not registered
	Jul 19 05:09:03 test-preload-332657 kubelet[1086]: E0719 05:09:03.007175    1086 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 19 05:09:03 test-preload-332657 kubelet[1086]: E0719 05:09:03.007301    1086 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/c5467929-b2d5-447e-bb33-d477400b09b4-config-volume podName:c5467929-b2d5-447e-bb33-d477400b09b4 nodeName:}" failed. No retries permitted until 2024-07-19 05:09:04.007278357 +0000 UTC m=+6.800091828 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5467929-b2d5-447e-bb33-d477400b09b4-config-volume") pod "coredns-6d4b75cb6d-trt5k" (UID: "c5467929-b2d5-447e-bb33-d477400b09b4") : object "kube-system"/"coredns" not registered
	Jul 19 05:09:04 test-preload-332657 kubelet[1086]: E0719 05:09:04.014876    1086 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 19 05:09:04 test-preload-332657 kubelet[1086]: E0719 05:09:04.014960    1086 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/c5467929-b2d5-447e-bb33-d477400b09b4-config-volume podName:c5467929-b2d5-447e-bb33-d477400b09b4 nodeName:}" failed. No retries permitted until 2024-07-19 05:09:06.014945196 +0000 UTC m=+8.807758659 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5467929-b2d5-447e-bb33-d477400b09b4-config-volume") pod "coredns-6d4b75cb6d-trt5k" (UID: "c5467929-b2d5-447e-bb33-d477400b09b4") : object "kube-system"/"coredns" not registered
	Jul 19 05:09:04 test-preload-332657 kubelet[1086]: E0719 05:09:04.425232    1086 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-trt5k" podUID=c5467929-b2d5-447e-bb33-d477400b09b4
	Jul 19 05:09:04 test-preload-332657 kubelet[1086]: I0719 05:09:04.461209    1086 scope.go:110] "RemoveContainer" containerID="51581f359efdcd584c4856ca62c66810f51af2b49c5802f82228515146d4913f"
	Jul 19 05:09:05 test-preload-332657 kubelet[1086]: I0719 05:09:05.430849    1086 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=d66b5dfa-29e7-45e9-b6b4-cfb79ac6a42c path="/var/lib/kubelet/pods/d66b5dfa-29e7-45e9-b6b4-cfb79ac6a42c/volumes"
	Jul 19 05:09:06 test-preload-332657 kubelet[1086]: E0719 05:09:06.028504    1086 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 19 05:09:06 test-preload-332657 kubelet[1086]: E0719 05:09:06.028615    1086 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/c5467929-b2d5-447e-bb33-d477400b09b4-config-volume podName:c5467929-b2d5-447e-bb33-d477400b09b4 nodeName:}" failed. No retries permitted until 2024-07-19 05:09:10.028564832 +0000 UTC m=+12.821378296 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c5467929-b2d5-447e-bb33-d477400b09b4-config-volume") pod "coredns-6d4b75cb6d-trt5k" (UID: "c5467929-b2d5-447e-bb33-d477400b09b4") : object "kube-system"/"coredns" not registered
	Jul 19 05:09:06 test-preload-332657 kubelet[1086]: E0719 05:09:06.426311    1086 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-trt5k" podUID=c5467929-b2d5-447e-bb33-d477400b09b4
	
	
	==> storage-provisioner [51581f359efdcd584c4856ca62c66810f51af2b49c5802f82228515146d4913f] <==
	I0719 05:09:03.552537       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0719 05:09:03.559702       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [ed599861ca1d8ec941c035b8f07376a17f334581dc1dfa2ef4e809ec9db728bb] <==
	I0719 05:09:04.535743       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 05:09:04.547526       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 05:09:04.547630       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-332657 -n test-preload-332657
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-332657 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-332657" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-332657
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-332657: (1.099509812s)
--- FAIL: TestPreload (186.30s)

                                                
                                    
x
+
TestKubernetesUpgrade (414.97s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-678139 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-678139 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m55.989200257s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-678139] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19302-122995/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-122995/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-678139" primary control-plane node in "kubernetes-upgrade-678139" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 05:11:08.910031  169271 out.go:291] Setting OutFile to fd 1 ...
	I0719 05:11:08.910304  169271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 05:11:08.910314  169271 out.go:304] Setting ErrFile to fd 2...
	I0719 05:11:08.910319  169271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 05:11:08.910506  169271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-122995/.minikube/bin
	I0719 05:11:08.911071  169271 out.go:298] Setting JSON to false
	I0719 05:11:08.912078  169271 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":10412,"bootTime":1721355457,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 05:11:08.912163  169271 start.go:139] virtualization: kvm guest
	I0719 05:11:08.913868  169271 out.go:177] * [kubernetes-upgrade-678139] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 05:11:08.915625  169271 notify.go:220] Checking for updates...
	I0719 05:11:08.915666  169271 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 05:11:08.917045  169271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 05:11:08.919047  169271 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-122995/kubeconfig
	I0719 05:11:08.920530  169271 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-122995/.minikube
	I0719 05:11:08.922303  169271 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 05:11:08.925258  169271 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 05:11:08.926946  169271 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 05:11:08.966272  169271 out.go:177] * Using the kvm2 driver based on user configuration
	I0719 05:11:08.967502  169271 start.go:297] selected driver: kvm2
	I0719 05:11:08.967529  169271 start.go:901] validating driver "kvm2" against <nil>
	I0719 05:11:08.967543  169271 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 05:11:08.968504  169271 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 05:11:08.977759  169271 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-122995/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 05:11:08.997923  169271 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 05:11:08.998036  169271 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 05:11:08.998377  169271 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 05:11:08.998410  169271 cni.go:84] Creating CNI manager for ""
	I0719 05:11:08.998421  169271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 05:11:08.998438  169271 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 05:11:08.998586  169271 start.go:340] cluster config:
	{Name:kubernetes-upgrade-678139 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-678139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 05:11:08.998754  169271 iso.go:125] acquiring lock: {Name:mk610026cb7ac7ecfa6440021a031d3b49160f81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 05:11:09.000789  169271 out.go:177] * Starting "kubernetes-upgrade-678139" primary control-plane node in "kubernetes-upgrade-678139" cluster
	I0719 05:11:09.002069  169271 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 05:11:09.002127  169271 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0719 05:11:09.002139  169271 cache.go:56] Caching tarball of preloaded images
	I0719 05:11:09.002294  169271 preload.go:172] Found /home/jenkins/minikube-integration/19302-122995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0719 05:11:09.002317  169271 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0719 05:11:09.002755  169271 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/config.json ...
	I0719 05:11:09.002795  169271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/config.json: {Name:mk90cecb9181356957b47c27cbc091efa1038369 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:11:09.002966  169271 start.go:360] acquireMachinesLock for kubernetes-upgrade-678139: {Name:mkfbbe6ca8c44534b944b48224a0199ec825bc72 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 05:11:33.422119  169271 start.go:364] duration metric: took 24.419123542s to acquireMachinesLock for "kubernetes-upgrade-678139"
	I0719 05:11:33.422203  169271 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-678139 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-678139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 05:11:33.422317  169271 start.go:125] createHost starting for "" (driver="kvm2")
	I0719 05:11:33.424581  169271 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 05:11:33.424807  169271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 05:11:33.424869  169271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 05:11:33.442688  169271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41823
	I0719 05:11:33.443251  169271 main.go:141] libmachine: () Calling .GetVersion
	I0719 05:11:33.443906  169271 main.go:141] libmachine: Using API Version  1
	I0719 05:11:33.443935  169271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 05:11:33.444353  169271 main.go:141] libmachine: () Calling .GetMachineName
	I0719 05:11:33.444545  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetMachineName
	I0719 05:11:33.444715  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .DriverName
	I0719 05:11:33.444873  169271 start.go:159] libmachine.API.Create for "kubernetes-upgrade-678139" (driver="kvm2")
	I0719 05:11:33.444897  169271 client.go:168] LocalClient.Create starting
	I0719 05:11:33.444934  169271 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem
	I0719 05:11:33.444972  169271 main.go:141] libmachine: Decoding PEM data...
	I0719 05:11:33.444989  169271 main.go:141] libmachine: Parsing certificate...
	I0719 05:11:33.445057  169271 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem
	I0719 05:11:33.445104  169271 main.go:141] libmachine: Decoding PEM data...
	I0719 05:11:33.445121  169271 main.go:141] libmachine: Parsing certificate...
	I0719 05:11:33.445144  169271 main.go:141] libmachine: Running pre-create checks...
	I0719 05:11:33.445156  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .PreCreateCheck
	I0719 05:11:33.445491  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetConfigRaw
	I0719 05:11:33.445893  169271 main.go:141] libmachine: Creating machine...
	I0719 05:11:33.445912  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .Create
	I0719 05:11:33.446131  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Creating KVM machine...
	I0719 05:11:33.447225  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | found existing default KVM network
	I0719 05:11:33.448118  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | I0719 05:11:33.447968  169586 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:50:24:ad} reservation:<nil>}
	I0719 05:11:33.448892  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | I0719 05:11:33.448811  169586 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010fc80}
	I0719 05:11:33.448918  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | created network xml: 
	I0719 05:11:33.448930  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | <network>
	I0719 05:11:33.448944  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG |   <name>mk-kubernetes-upgrade-678139</name>
	I0719 05:11:33.448957  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG |   <dns enable='no'/>
	I0719 05:11:33.448968  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG |   
	I0719 05:11:33.448980  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0719 05:11:33.448991  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG |     <dhcp>
	I0719 05:11:33.449001  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0719 05:11:33.449013  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG |     </dhcp>
	I0719 05:11:33.449043  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG |   </ip>
	I0719 05:11:33.449081  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG |   
	I0719 05:11:33.449095  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | </network>
	I0719 05:11:33.449107  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | 
	I0719 05:11:33.454887  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | trying to create private KVM network mk-kubernetes-upgrade-678139 192.168.50.0/24...
	I0719 05:11:33.525544  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | private KVM network mk-kubernetes-upgrade-678139 192.168.50.0/24 created
	I0719 05:11:33.525582  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | I0719 05:11:33.525514  169586 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19302-122995/.minikube
	I0719 05:11:33.525602  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Setting up store path in /home/jenkins/minikube-integration/19302-122995/.minikube/machines/kubernetes-upgrade-678139 ...
	I0719 05:11:33.525625  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Building disk image from file:///home/jenkins/minikube-integration/19302-122995/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 05:11:33.525644  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Downloading /home/jenkins/minikube-integration/19302-122995/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19302-122995/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 05:11:33.764938  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | I0719 05:11:33.764797  169586 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/kubernetes-upgrade-678139/id_rsa...
	I0719 05:11:33.926217  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | I0719 05:11:33.926119  169586 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/kubernetes-upgrade-678139/kubernetes-upgrade-678139.rawdisk...
	I0719 05:11:33.926250  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | Writing magic tar header
	I0719 05:11:33.926266  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | Writing SSH key tar header
	I0719 05:11:33.926278  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | I0719 05:11:33.926250  169586 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19302-122995/.minikube/machines/kubernetes-upgrade-678139 ...
	I0719 05:11:33.926415  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995/.minikube/machines/kubernetes-upgrade-678139 (perms=drwx------)
	I0719 05:11:33.926437  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/kubernetes-upgrade-678139
	I0719 05:11:33.926445  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995/.minikube/machines (perms=drwxr-xr-x)
	I0719 05:11:33.926462  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995/.minikube (perms=drwxr-xr-x)
	I0719 05:11:33.926476  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995/.minikube/machines
	I0719 05:11:33.926489  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995 (perms=drwxrwxr-x)
	I0719 05:11:33.926503  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0719 05:11:33.926518  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0719 05:11:33.926532  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995/.minikube
	I0719 05:11:33.926543  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Creating domain...
	I0719 05:11:33.926563  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995
	I0719 05:11:33.926575  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0719 05:11:33.926588  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | Checking permissions on dir: /home/jenkins
	I0719 05:11:33.926598  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | Checking permissions on dir: /home
	I0719 05:11:33.926657  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | Skipping /home - not owner
	I0719 05:11:33.927575  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) define libvirt domain using xml: 
	I0719 05:11:33.927597  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) <domain type='kvm'>
	I0719 05:11:33.927606  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)   <name>kubernetes-upgrade-678139</name>
	I0719 05:11:33.927611  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)   <memory unit='MiB'>2200</memory>
	I0719 05:11:33.927617  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)   <vcpu>2</vcpu>
	I0719 05:11:33.927627  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)   <features>
	I0719 05:11:33.927635  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)     <acpi/>
	I0719 05:11:33.927642  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)     <apic/>
	I0719 05:11:33.927649  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)     <pae/>
	I0719 05:11:33.927660  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)     
	I0719 05:11:33.927691  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)   </features>
	I0719 05:11:33.927718  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)   <cpu mode='host-passthrough'>
	I0719 05:11:33.927748  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)   
	I0719 05:11:33.927770  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)   </cpu>
	I0719 05:11:33.927780  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)   <os>
	I0719 05:11:33.927791  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)     <type>hvm</type>
	I0719 05:11:33.927801  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)     <boot dev='cdrom'/>
	I0719 05:11:33.927813  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)     <boot dev='hd'/>
	I0719 05:11:33.927827  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)     <bootmenu enable='no'/>
	I0719 05:11:33.927837  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)   </os>
	I0719 05:11:33.927846  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)   <devices>
	I0719 05:11:33.927863  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)     <disk type='file' device='cdrom'>
	I0719 05:11:33.927879  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)       <source file='/home/jenkins/minikube-integration/19302-122995/.minikube/machines/kubernetes-upgrade-678139/boot2docker.iso'/>
	I0719 05:11:33.927892  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)       <target dev='hdc' bus='scsi'/>
	I0719 05:11:33.927902  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)       <readonly/>
	I0719 05:11:33.927915  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)     </disk>
	I0719 05:11:33.927926  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)     <disk type='file' device='disk'>
	I0719 05:11:33.927940  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0719 05:11:33.927959  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)       <source file='/home/jenkins/minikube-integration/19302-122995/.minikube/machines/kubernetes-upgrade-678139/kubernetes-upgrade-678139.rawdisk'/>
	I0719 05:11:33.927970  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)       <target dev='hda' bus='virtio'/>
	I0719 05:11:33.927982  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)     </disk>
	I0719 05:11:33.927991  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)     <interface type='network'>
	I0719 05:11:33.928008  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)       <source network='mk-kubernetes-upgrade-678139'/>
	I0719 05:11:33.928027  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)       <model type='virtio'/>
	I0719 05:11:33.928040  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)     </interface>
	I0719 05:11:33.928052  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)     <interface type='network'>
	I0719 05:11:33.928063  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)       <source network='default'/>
	I0719 05:11:33.928074  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)       <model type='virtio'/>
	I0719 05:11:33.928083  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)     </interface>
	I0719 05:11:33.928094  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)     <serial type='pty'>
	I0719 05:11:33.928103  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)       <target port='0'/>
	I0719 05:11:33.928118  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)     </serial>
	I0719 05:11:33.928131  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)     <console type='pty'>
	I0719 05:11:33.928143  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)       <target type='serial' port='0'/>
	I0719 05:11:33.928155  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)     </console>
	I0719 05:11:33.928166  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)     <rng model='virtio'>
	I0719 05:11:33.928176  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)       <backend model='random'>/dev/random</backend>
	I0719 05:11:33.928185  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)     </rng>
	I0719 05:11:33.928199  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)     
	I0719 05:11:33.928212  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)     
	I0719 05:11:33.928238  169271 main.go:141] libmachine: (kubernetes-upgrade-678139)   </devices>
	I0719 05:11:33.928254  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) </domain>
	I0719 05:11:33.928268  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) 
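The lines above show the libvirt domain XML the kvm2 driver defines for the VM. As a point of reference, here is a minimal sketch of how a comparable domain document could be rendered in Go with the standard text/template package; the template, field names, and domainConfig struct below are illustrative placeholders, not the driver's actual template.

package main

import (
	"os"
	"text/template"
)

// domainTmpl is a trimmed-down illustration of a libvirt domain definition
// similar in shape to the one logged above; it is not the driver's template.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

type domainConfig struct {
	Name      string
	MemoryMiB int
	CPUs      int
	DiskPath  string
	Network   string
}

func main() {
	cfg := domainConfig{
		Name:      "kubernetes-upgrade-678139",
		MemoryMiB: 2200,
		CPUs:      2,
		DiskPath:  "/path/to/kubernetes-upgrade-678139.rawdisk", // placeholder path
		Network:   "mk-kubernetes-upgrade-678139",
	}
	// Render the XML to stdout; a real driver would hand the result to the
	// libvirt API (domain define) instead of printing it.
	tmpl := template.Must(template.New("domain").Parse(domainTmpl))
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}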
	I0719 05:11:33.937325  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:3e:4a:4c in network default
	I0719 05:11:33.937917  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Ensuring networks are active...
	I0719 05:11:33.937943  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:33.938695  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Ensuring network default is active
	I0719 05:11:33.939041  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Ensuring network mk-kubernetes-upgrade-678139 is active
	I0719 05:11:33.939572  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Getting domain xml...
	I0719 05:11:33.940238  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Creating domain...
	I0719 05:11:35.218653  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Waiting to get IP...
	I0719 05:11:35.219871  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:35.220432  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | unable to find current IP address of domain kubernetes-upgrade-678139 in network mk-kubernetes-upgrade-678139
	I0719 05:11:35.220466  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | I0719 05:11:35.220331  169586 retry.go:31] will retry after 204.59445ms: waiting for machine to come up
	I0719 05:11:35.426871  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:35.427375  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | unable to find current IP address of domain kubernetes-upgrade-678139 in network mk-kubernetes-upgrade-678139
	I0719 05:11:35.427405  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | I0719 05:11:35.427319  169586 retry.go:31] will retry after 242.943198ms: waiting for machine to come up
	I0719 05:11:35.671739  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:35.672242  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | unable to find current IP address of domain kubernetes-upgrade-678139 in network mk-kubernetes-upgrade-678139
	I0719 05:11:35.672278  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | I0719 05:11:35.672206  169586 retry.go:31] will retry after 336.176286ms: waiting for machine to come up
	I0719 05:11:36.009657  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:36.010021  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | unable to find current IP address of domain kubernetes-upgrade-678139 in network mk-kubernetes-upgrade-678139
	I0719 05:11:36.010052  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | I0719 05:11:36.009979  169586 retry.go:31] will retry after 551.341938ms: waiting for machine to come up
	I0719 05:11:36.563382  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:36.563882  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | unable to find current IP address of domain kubernetes-upgrade-678139 in network mk-kubernetes-upgrade-678139
	I0719 05:11:36.563922  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | I0719 05:11:36.563833  169586 retry.go:31] will retry after 566.518635ms: waiting for machine to come up
	I0719 05:11:37.131648  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:37.132054  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | unable to find current IP address of domain kubernetes-upgrade-678139 in network mk-kubernetes-upgrade-678139
	I0719 05:11:37.132089  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | I0719 05:11:37.132004  169586 retry.go:31] will retry after 884.684974ms: waiting for machine to come up
	I0719 05:11:38.018166  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:38.018641  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | unable to find current IP address of domain kubernetes-upgrade-678139 in network mk-kubernetes-upgrade-678139
	I0719 05:11:38.018669  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | I0719 05:11:38.018579  169586 retry.go:31] will retry after 980.959173ms: waiting for machine to come up
	I0719 05:11:39.001486  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:39.001987  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | unable to find current IP address of domain kubernetes-upgrade-678139 in network mk-kubernetes-upgrade-678139
	I0719 05:11:39.002035  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | I0719 05:11:39.001939  169586 retry.go:31] will retry after 1.343273942s: waiting for machine to come up
	I0719 05:11:40.347706  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:40.348246  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | unable to find current IP address of domain kubernetes-upgrade-678139 in network mk-kubernetes-upgrade-678139
	I0719 05:11:40.348334  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | I0719 05:11:40.348260  169586 retry.go:31] will retry after 1.493214639s: waiting for machine to come up
	I0719 05:11:41.842805  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:41.843294  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | unable to find current IP address of domain kubernetes-upgrade-678139 in network mk-kubernetes-upgrade-678139
	I0719 05:11:41.843330  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | I0719 05:11:41.843242  169586 retry.go:31] will retry after 1.742983384s: waiting for machine to come up
	I0719 05:11:43.587733  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:43.588171  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | unable to find current IP address of domain kubernetes-upgrade-678139 in network mk-kubernetes-upgrade-678139
	I0719 05:11:43.588193  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | I0719 05:11:43.588135  169586 retry.go:31] will retry after 2.304656116s: waiting for machine to come up
	I0719 05:11:45.894419  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:45.894838  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | unable to find current IP address of domain kubernetes-upgrade-678139 in network mk-kubernetes-upgrade-678139
	I0719 05:11:45.894865  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | I0719 05:11:45.894800  169586 retry.go:31] will retry after 2.750483148s: waiting for machine to come up
	I0719 05:11:48.646853  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:48.647278  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | unable to find current IP address of domain kubernetes-upgrade-678139 in network mk-kubernetes-upgrade-678139
	I0719 05:11:48.647300  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | I0719 05:11:48.647260  169586 retry.go:31] will retry after 2.8876466s: waiting for machine to come up
	I0719 05:11:51.538448  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:51.538861  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | unable to find current IP address of domain kubernetes-upgrade-678139 in network mk-kubernetes-upgrade-678139
	I0719 05:11:51.538885  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | I0719 05:11:51.538804  169586 retry.go:31] will retry after 4.300079577s: waiting for machine to come up
	I0719 05:11:55.841450  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:55.841920  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Found IP for machine: 192.168.50.182
	I0719 05:11:55.841947  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Reserving static IP address...
	I0719 05:11:55.841962  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has current primary IP address 192.168.50.182 and MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:55.842201  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-678139", mac: "52:54:00:77:3f:2e", ip: "192.168.50.182"} in network mk-kubernetes-upgrade-678139
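The "will retry after ..." lines above come from a poll-with-growing-backoff loop that keeps checking the libvirt network's DHCP leases until the new domain has an address. A minimal standalone sketch of that pattern, where lookupIP is a hypothetical stand-in for the driver's lease lookup:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a placeholder for querying the DHCP leases of the libvirt
// network; here it simply fails a few times before "finding" an address.
var attempts int

func lookupIP() (string, error) {
	attempts++
	if attempts < 5 {
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.50.182", nil
}

func main() {
	backoff := 200 * time.Millisecond
	for {
		ip, err := lookupIP()
		if err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		// Grow the delay and add jitter, roughly like the intervals in the
		// log (0.2s, 0.24s, 0.33s, ... up to a few seconds).
		wait := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		backoff = backoff * 3 / 2
	}
}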
	I0719 05:11:55.914507  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | Getting to WaitForSSH function...
	I0719 05:11:55.914540  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Reserved static IP address: 192.168.50.182
	I0719 05:11:55.914554  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Waiting for SSH to be available...
	I0719 05:11:55.917213  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:55.917562  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:77:3f:2e", ip: ""} in network mk-kubernetes-upgrade-678139
	I0719 05:11:55.917584  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | unable to find defined IP address of network mk-kubernetes-upgrade-678139 interface with MAC address 52:54:00:77:3f:2e
	I0719 05:11:55.917749  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | Using SSH client type: external
	I0719 05:11:55.917780  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/kubernetes-upgrade-678139/id_rsa (-rw-------)
	I0719 05:11:55.917820  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-122995/.minikube/machines/kubernetes-upgrade-678139/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 05:11:55.917835  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | About to run SSH command:
	I0719 05:11:55.917851  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | exit 0
	I0719 05:11:55.921795  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | SSH cmd err, output: exit status 255: 
	I0719 05:11:55.921827  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0719 05:11:55.921839  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | command : exit 0
	I0719 05:11:55.921847  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | err     : exit status 255
	I0719 05:11:55.921859  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | output  : 
	I0719 05:11:58.922297  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | Getting to WaitForSSH function...
	I0719 05:11:58.924705  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:58.925141  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:3f:2e", ip: ""} in network mk-kubernetes-upgrade-678139: {Iface:virbr2 ExpiryTime:2024-07-19 06:11:47 +0000 UTC Type:0 Mac:52:54:00:77:3f:2e Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-678139 Clientid:01:52:54:00:77:3f:2e}
	I0719 05:11:58.925175  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined IP address 192.168.50.182 and MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:58.925234  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | Using SSH client type: external
	I0719 05:11:58.925258  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | Using SSH private key: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/kubernetes-upgrade-678139/id_rsa (-rw-------)
	I0719 05:11:58.925294  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.182 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19302-122995/.minikube/machines/kubernetes-upgrade-678139/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0719 05:11:58.925315  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | About to run SSH command:
	I0719 05:11:58.925328  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | exit 0
	I0719 05:11:59.048829  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | SSH cmd err, output: <nil>: 
	I0719 05:11:59.049144  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) KVM machine creation complete!
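The WaitForSSH step above shells out to the system ssh client with a fixed option set and runs "exit 0" until the guest answers (the first attempt fails with exit status 255 while sshd is still starting). A hedged sketch of that kind of external probe using os/exec; the host and key path are taken from the log, while the sshReady helper and the retry cadence are illustrative only.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs "exit 0" on the target through the system ssh binary and
// reports whether the command succeeded.
func sshReady(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@"+host,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	host := "192.168.50.182"
	key := "/home/jenkins/minikube-integration/19302-122995/.minikube/machines/kubernetes-upgrade-678139/id_rsa"
	for i := 0; i < 20; i++ {
		if sshReady(host, key) {
			fmt.Println("SSH is available")
			return
		}
		fmt.Println("SSH not ready, retrying...")
		time.Sleep(3 * time.Second) // the driver also waited ~3s between attempts
	}
	fmt.Println("gave up waiting for SSH")
}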
	I0719 05:11:59.049460  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetConfigRaw
	I0719 05:11:59.049973  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .DriverName
	I0719 05:11:59.050155  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .DriverName
	I0719 05:11:59.050303  169271 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0719 05:11:59.050318  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetState
	I0719 05:11:59.051488  169271 main.go:141] libmachine: Detecting operating system of created instance...
	I0719 05:11:59.051502  169271 main.go:141] libmachine: Waiting for SSH to be available...
	I0719 05:11:59.051507  169271 main.go:141] libmachine: Getting to WaitForSSH function...
	I0719 05:11:59.051514  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHHostname
	I0719 05:11:59.053584  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:59.053872  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:3f:2e", ip: ""} in network mk-kubernetes-upgrade-678139: {Iface:virbr2 ExpiryTime:2024-07-19 06:11:47 +0000 UTC Type:0 Mac:52:54:00:77:3f:2e Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-678139 Clientid:01:52:54:00:77:3f:2e}
	I0719 05:11:59.053903  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined IP address 192.168.50.182 and MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:59.054033  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHPort
	I0719 05:11:59.054222  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHKeyPath
	I0719 05:11:59.054409  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHKeyPath
	I0719 05:11:59.054583  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHUsername
	I0719 05:11:59.054754  169271 main.go:141] libmachine: Using SSH client type: native
	I0719 05:11:59.054991  169271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.182 22 <nil> <nil>}
	I0719 05:11:59.055003  169271 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0719 05:11:59.160920  169271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 05:11:59.160948  169271 main.go:141] libmachine: Detecting the provisioner...
	I0719 05:11:59.160960  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHHostname
	I0719 05:11:59.163730  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:59.164060  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:3f:2e", ip: ""} in network mk-kubernetes-upgrade-678139: {Iface:virbr2 ExpiryTime:2024-07-19 06:11:47 +0000 UTC Type:0 Mac:52:54:00:77:3f:2e Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-678139 Clientid:01:52:54:00:77:3f:2e}
	I0719 05:11:59.164094  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined IP address 192.168.50.182 and MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:59.164232  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHPort
	I0719 05:11:59.164435  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHKeyPath
	I0719 05:11:59.164615  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHKeyPath
	I0719 05:11:59.164765  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHUsername
	I0719 05:11:59.164927  169271 main.go:141] libmachine: Using SSH client type: native
	I0719 05:11:59.165184  169271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.182 22 <nil> <nil>}
	I0719 05:11:59.165198  169271 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0719 05:11:59.269233  169271 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0719 05:11:59.269330  169271 main.go:141] libmachine: found compatible host: buildroot
	I0719 05:11:59.269340  169271 main.go:141] libmachine: Provisioning with buildroot...
	I0719 05:11:59.269348  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetMachineName
	I0719 05:11:59.269598  169271 buildroot.go:166] provisioning hostname "kubernetes-upgrade-678139"
	I0719 05:11:59.269626  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetMachineName
	I0719 05:11:59.269804  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHHostname
	I0719 05:11:59.272388  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:59.272753  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:3f:2e", ip: ""} in network mk-kubernetes-upgrade-678139: {Iface:virbr2 ExpiryTime:2024-07-19 06:11:47 +0000 UTC Type:0 Mac:52:54:00:77:3f:2e Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-678139 Clientid:01:52:54:00:77:3f:2e}
	I0719 05:11:59.272784  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined IP address 192.168.50.182 and MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:59.272926  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHPort
	I0719 05:11:59.273154  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHKeyPath
	I0719 05:11:59.273328  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHKeyPath
	I0719 05:11:59.273451  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHUsername
	I0719 05:11:59.273623  169271 main.go:141] libmachine: Using SSH client type: native
	I0719 05:11:59.273800  169271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.182 22 <nil> <nil>}
	I0719 05:11:59.273817  169271 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-678139 && echo "kubernetes-upgrade-678139" | sudo tee /etc/hostname
	I0719 05:11:59.390006  169271 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-678139
	
	I0719 05:11:59.390039  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHHostname
	I0719 05:11:59.392552  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:59.392802  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:3f:2e", ip: ""} in network mk-kubernetes-upgrade-678139: {Iface:virbr2 ExpiryTime:2024-07-19 06:11:47 +0000 UTC Type:0 Mac:52:54:00:77:3f:2e Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-678139 Clientid:01:52:54:00:77:3f:2e}
	I0719 05:11:59.392832  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined IP address 192.168.50.182 and MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:59.393006  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHPort
	I0719 05:11:59.393232  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHKeyPath
	I0719 05:11:59.393427  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHKeyPath
	I0719 05:11:59.393571  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHUsername
	I0719 05:11:59.393749  169271 main.go:141] libmachine: Using SSH client type: native
	I0719 05:11:59.393914  169271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.182 22 <nil> <nil>}
	I0719 05:11:59.393930  169271 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-678139' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-678139/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-678139' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 05:11:59.509514  169271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
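The two commands above set the guest hostname and then patch the 127.0.1.1 entry in /etc/hosts so it matches. A minimal sketch of how those shell snippets could be assembled in Go, mirroring the commands visible in the log; the hostnameCommands helper is a hypothetical name.

package main

import "fmt"

// hostnameCommands returns the two shell snippets the provisioner runs to
// set the guest hostname and keep /etc/hosts consistent with it.
func hostnameCommands(name string) (setHostname, fixHosts string) {
	setHostname = fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
	fixHosts = fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, name)
	return setHostname, fixHosts
}

func main() {
	a, b := hostnameCommands("kubernetes-upgrade-678139")
	fmt.Println(a)
	fmt.Println(b)
}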
	I0719 05:11:59.509543  169271 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-122995/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-122995/.minikube}
	I0719 05:11:59.509581  169271 buildroot.go:174] setting up certificates
	I0719 05:11:59.509592  169271 provision.go:84] configureAuth start
	I0719 05:11:59.509605  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetMachineName
	I0719 05:11:59.509929  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetIP
	I0719 05:11:59.512445  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:59.512797  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:3f:2e", ip: ""} in network mk-kubernetes-upgrade-678139: {Iface:virbr2 ExpiryTime:2024-07-19 06:11:47 +0000 UTC Type:0 Mac:52:54:00:77:3f:2e Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-678139 Clientid:01:52:54:00:77:3f:2e}
	I0719 05:11:59.512826  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined IP address 192.168.50.182 and MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:59.512966  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHHostname
	I0719 05:11:59.515158  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:59.515449  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:3f:2e", ip: ""} in network mk-kubernetes-upgrade-678139: {Iface:virbr2 ExpiryTime:2024-07-19 06:11:47 +0000 UTC Type:0 Mac:52:54:00:77:3f:2e Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-678139 Clientid:01:52:54:00:77:3f:2e}
	I0719 05:11:59.515481  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined IP address 192.168.50.182 and MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:59.515571  169271 provision.go:143] copyHostCerts
	I0719 05:11:59.515639  169271 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem, removing ...
	I0719 05:11:59.515657  169271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem
	I0719 05:11:59.515714  169271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem (1679 bytes)
	I0719 05:11:59.515808  169271 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem, removing ...
	I0719 05:11:59.515826  169271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem
	I0719 05:11:59.515858  169271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem (1082 bytes)
	I0719 05:11:59.515932  169271 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem, removing ...
	I0719 05:11:59.515941  169271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem
	I0719 05:11:59.515967  169271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem (1123 bytes)
	I0719 05:11:59.516031  169271 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-678139 san=[127.0.0.1 192.168.50.182 kubernetes-upgrade-678139 localhost minikube]
	I0719 05:11:59.590577  169271 provision.go:177] copyRemoteCerts
	I0719 05:11:59.590649  169271 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 05:11:59.590682  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHHostname
	I0719 05:11:59.593199  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:59.593543  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:3f:2e", ip: ""} in network mk-kubernetes-upgrade-678139: {Iface:virbr2 ExpiryTime:2024-07-19 06:11:47 +0000 UTC Type:0 Mac:52:54:00:77:3f:2e Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-678139 Clientid:01:52:54:00:77:3f:2e}
	I0719 05:11:59.593577  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined IP address 192.168.50.182 and MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:59.593734  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHPort
	I0719 05:11:59.593936  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHKeyPath
	I0719 05:11:59.594148  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHUsername
	I0719 05:11:59.594272  169271 sshutil.go:53] new ssh client: &{IP:192.168.50.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/kubernetes-upgrade-678139/id_rsa Username:docker}
	I0719 05:11:59.676363  169271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0719 05:11:59.700002  169271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 05:11:59.724749  169271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 05:11:59.748836  169271 provision.go:87] duration metric: took 239.228432ms to configureAuth
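configureAuth above generates a server certificate whose SANs cover the loopback address, the VM IP, and the machine's host names, signed by the minikube CA. For orientation, here is a self-signed sketch with the same SAN set using the standard crypto/x509 package; it is illustrative only, since minikube signs the server cert with its CA key rather than self-signing, and the 26280h lifetime is taken from the profile config shown later in the log.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a key and a self-signed server certificate whose SANs match
	// the set logged above (127.0.0.1, the VM IP, and the host names).
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-678139"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.182")},
		DNSNames:     []string{"kubernetes-upgrade-678139", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}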
	I0719 05:11:59.748864  169271 buildroot.go:189] setting minikube options for container-runtime
	I0719 05:11:59.749043  169271 config.go:182] Loaded profile config "kubernetes-upgrade-678139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0719 05:11:59.749170  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHHostname
	I0719 05:11:59.751838  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:59.752137  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:3f:2e", ip: ""} in network mk-kubernetes-upgrade-678139: {Iface:virbr2 ExpiryTime:2024-07-19 06:11:47 +0000 UTC Type:0 Mac:52:54:00:77:3f:2e Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-678139 Clientid:01:52:54:00:77:3f:2e}
	I0719 05:11:59.752167  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined IP address 192.168.50.182 and MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:11:59.752323  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHPort
	I0719 05:11:59.752496  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHKeyPath
	I0719 05:11:59.752695  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHKeyPath
	I0719 05:11:59.752847  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHUsername
	I0719 05:11:59.752993  169271 main.go:141] libmachine: Using SSH client type: native
	I0719 05:11:59.753195  169271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.182 22 <nil> <nil>}
	I0719 05:11:59.753225  169271 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 05:12:00.026269  169271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 05:12:00.026297  169271 main.go:141] libmachine: Checking connection to Docker...
	I0719 05:12:00.026309  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetURL
	I0719 05:12:00.027688  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | Using libvirt version 6000000
	I0719 05:12:00.029882  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:12:00.030217  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:3f:2e", ip: ""} in network mk-kubernetes-upgrade-678139: {Iface:virbr2 ExpiryTime:2024-07-19 06:11:47 +0000 UTC Type:0 Mac:52:54:00:77:3f:2e Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-678139 Clientid:01:52:54:00:77:3f:2e}
	I0719 05:12:00.030251  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined IP address 192.168.50.182 and MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:12:00.030389  169271 main.go:141] libmachine: Docker is up and running!
	I0719 05:12:00.030402  169271 main.go:141] libmachine: Reticulating splines...
	I0719 05:12:00.030423  169271 client.go:171] duration metric: took 26.585504135s to LocalClient.Create
	I0719 05:12:00.030450  169271 start.go:167] duration metric: took 26.585580813s to libmachine.API.Create "kubernetes-upgrade-678139"
	I0719 05:12:00.030459  169271 start.go:293] postStartSetup for "kubernetes-upgrade-678139" (driver="kvm2")
	I0719 05:12:00.030471  169271 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 05:12:00.030492  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .DriverName
	I0719 05:12:00.030748  169271 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 05:12:00.030775  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHHostname
	I0719 05:12:00.032959  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:12:00.033278  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:3f:2e", ip: ""} in network mk-kubernetes-upgrade-678139: {Iface:virbr2 ExpiryTime:2024-07-19 06:11:47 +0000 UTC Type:0 Mac:52:54:00:77:3f:2e Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-678139 Clientid:01:52:54:00:77:3f:2e}
	I0719 05:12:00.033307  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined IP address 192.168.50.182 and MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:12:00.033567  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHPort
	I0719 05:12:00.033739  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHKeyPath
	I0719 05:12:00.033914  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHUsername
	I0719 05:12:00.034048  169271 sshutil.go:53] new ssh client: &{IP:192.168.50.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/kubernetes-upgrade-678139/id_rsa Username:docker}
	I0719 05:12:00.115214  169271 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 05:12:00.119172  169271 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 05:12:00.119200  169271 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-122995/.minikube/addons for local assets ...
	I0719 05:12:00.119264  169271 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-122995/.minikube/files for local assets ...
	I0719 05:12:00.119359  169271 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem -> 1301702.pem in /etc/ssl/certs
	I0719 05:12:00.119500  169271 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 05:12:00.128371  169271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem --> /etc/ssl/certs/1301702.pem (1708 bytes)
	I0719 05:12:00.156598  169271 start.go:296] duration metric: took 126.121254ms for postStartSetup
	I0719 05:12:00.156654  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetConfigRaw
	I0719 05:12:00.157285  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetIP
	I0719 05:12:00.160178  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:12:00.160545  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:3f:2e", ip: ""} in network mk-kubernetes-upgrade-678139: {Iface:virbr2 ExpiryTime:2024-07-19 06:11:47 +0000 UTC Type:0 Mac:52:54:00:77:3f:2e Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-678139 Clientid:01:52:54:00:77:3f:2e}
	I0719 05:12:00.160576  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined IP address 192.168.50.182 and MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:12:00.160861  169271 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/config.json ...
	I0719 05:12:00.161131  169271 start.go:128] duration metric: took 26.738800082s to createHost
	I0719 05:12:00.161161  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHHostname
	I0719 05:12:00.163252  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:12:00.163612  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:3f:2e", ip: ""} in network mk-kubernetes-upgrade-678139: {Iface:virbr2 ExpiryTime:2024-07-19 06:11:47 +0000 UTC Type:0 Mac:52:54:00:77:3f:2e Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-678139 Clientid:01:52:54:00:77:3f:2e}
	I0719 05:12:00.163645  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined IP address 192.168.50.182 and MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:12:00.163766  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHPort
	I0719 05:12:00.163965  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHKeyPath
	I0719 05:12:00.164163  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHKeyPath
	I0719 05:12:00.164304  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHUsername
	I0719 05:12:00.164477  169271 main.go:141] libmachine: Using SSH client type: native
	I0719 05:12:00.164697  169271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.182 22 <nil> <nil>}
	I0719 05:12:00.164716  169271 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 05:12:00.273585  169271 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721365920.240321804
	
	I0719 05:12:00.273610  169271 fix.go:216] guest clock: 1721365920.240321804
	I0719 05:12:00.273619  169271 fix.go:229] Guest: 2024-07-19 05:12:00.240321804 +0000 UTC Remote: 2024-07-19 05:12:00.161146127 +0000 UTC m=+51.308072071 (delta=79.175677ms)
	I0719 05:12:00.273646  169271 fix.go:200] guest clock delta is within tolerance: 79.175677ms
	I0719 05:12:00.273655  169271 start.go:83] releasing machines lock for "kubernetes-upgrade-678139", held for 26.85149371s
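The clock check above runs "date +%s.%N" in the guest and compares it with the host timestamp, reporting a delta of about 79ms. A small sketch of that comparison with the two timestamps from the log; the 2s tolerance constant is an assumption for illustration, not minikube's documented value.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Parse the guest's `date +%s.%N` output and compare it with the host
	// clock, mirroring the "guest clock delta is within tolerance" check.
	guestOut := "1721365920.240321804" // value returned over SSH in the log
	guestSec, _ := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	guest := time.Unix(0, int64(guestSec*float64(time.Second)))

	host := time.Unix(0, 1721365920161146127) // host-side timestamp from the log
	delta := guest.Sub(host)

	const tolerance = 2 * time.Second // assumed tolerance, for illustration
	fmt.Printf("delta=%v within tolerance=%v: %v\n",
		delta, tolerance, math.Abs(float64(delta)) <= float64(tolerance))
}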
	I0719 05:12:00.273693  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .DriverName
	I0719 05:12:00.273981  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetIP
	I0719 05:12:00.276675  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:12:00.277031  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:3f:2e", ip: ""} in network mk-kubernetes-upgrade-678139: {Iface:virbr2 ExpiryTime:2024-07-19 06:11:47 +0000 UTC Type:0 Mac:52:54:00:77:3f:2e Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-678139 Clientid:01:52:54:00:77:3f:2e}
	I0719 05:12:00.277076  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined IP address 192.168.50.182 and MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:12:00.277259  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .DriverName
	I0719 05:12:00.277733  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .DriverName
	I0719 05:12:00.277945  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .DriverName
	I0719 05:12:00.278027  169271 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 05:12:00.278075  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHHostname
	I0719 05:12:00.278180  169271 ssh_runner.go:195] Run: cat /version.json
	I0719 05:12:00.278205  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHHostname
	I0719 05:12:00.280640  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:12:00.280991  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:3f:2e", ip: ""} in network mk-kubernetes-upgrade-678139: {Iface:virbr2 ExpiryTime:2024-07-19 06:11:47 +0000 UTC Type:0 Mac:52:54:00:77:3f:2e Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-678139 Clientid:01:52:54:00:77:3f:2e}
	I0719 05:12:00.281016  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined IP address 192.168.50.182 and MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:12:00.281036  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:12:00.281198  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHPort
	I0719 05:12:00.281377  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHKeyPath
	I0719 05:12:00.281476  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:3f:2e", ip: ""} in network mk-kubernetes-upgrade-678139: {Iface:virbr2 ExpiryTime:2024-07-19 06:11:47 +0000 UTC Type:0 Mac:52:54:00:77:3f:2e Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-678139 Clientid:01:52:54:00:77:3f:2e}
	I0719 05:12:00.281507  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined IP address 192.168.50.182 and MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:12:00.281547  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHUsername
	I0719 05:12:00.281699  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHPort
	I0719 05:12:00.281715  169271 sshutil.go:53] new ssh client: &{IP:192.168.50.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/kubernetes-upgrade-678139/id_rsa Username:docker}
	I0719 05:12:00.281845  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHKeyPath
	I0719 05:12:00.282003  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHUsername
	I0719 05:12:00.282162  169271 sshutil.go:53] new ssh client: &{IP:192.168.50.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/kubernetes-upgrade-678139/id_rsa Username:docker}
	I0719 05:12:00.405912  169271 ssh_runner.go:195] Run: systemctl --version
	I0719 05:12:00.412068  169271 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 05:12:00.576351  169271 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 05:12:00.583104  169271 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 05:12:00.583176  169271 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 05:12:00.599166  169271 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 05:12:00.599194  169271 start.go:495] detecting cgroup driver to use...
	I0719 05:12:00.599264  169271 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 05:12:00.618529  169271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 05:12:00.635740  169271 docker.go:217] disabling cri-docker service (if available) ...
	I0719 05:12:00.635810  169271 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 05:12:00.648956  169271 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 05:12:00.662179  169271 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 05:12:00.812699  169271 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 05:12:00.977765  169271 docker.go:233] disabling docker service ...
	I0719 05:12:00.977846  169271 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 05:12:00.991986  169271 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 05:12:01.004233  169271 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 05:12:01.127487  169271 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 05:12:01.249861  169271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 05:12:01.264004  169271 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 05:12:01.282675  169271 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0719 05:12:01.282743  169271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 05:12:01.292836  169271 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 05:12:01.292914  169271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 05:12:01.303084  169271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 05:12:01.313135  169271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 05:12:01.322753  169271 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 05:12:01.332901  169271 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 05:12:01.341559  169271 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 05:12:01.341634  169271 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 05:12:01.358131  169271 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 05:12:01.372067  169271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:12:01.492560  169271 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 05:12:01.655074  169271 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 05:12:01.655156  169271 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 05:12:01.660624  169271 start.go:563] Will wait 60s for crictl version
	I0719 05:12:01.660683  169271 ssh_runner.go:195] Run: which crictl
	I0719 05:12:01.665343  169271 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 05:12:01.706088  169271 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 05:12:01.706184  169271 ssh_runner.go:195] Run: crio --version
	I0719 05:12:01.735071  169271 ssh_runner.go:195] Run: crio --version
	I0719 05:12:01.765433  169271 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0719 05:12:01.766783  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetIP
	I0719 05:12:01.769432  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:12:01.769897  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:3f:2e", ip: ""} in network mk-kubernetes-upgrade-678139: {Iface:virbr2 ExpiryTime:2024-07-19 06:11:47 +0000 UTC Type:0 Mac:52:54:00:77:3f:2e Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-678139 Clientid:01:52:54:00:77:3f:2e}
	I0719 05:12:01.769929  169271 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined IP address 192.168.50.182 and MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:12:01.770208  169271 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0719 05:12:01.774677  169271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 05:12:01.789443  169271 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-678139 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.20.0 ClusterName:kubernetes-upgrade-678139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.182 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 05:12:01.789583  169271 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 05:12:01.789641  169271 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 05:12:01.824520  169271 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 05:12:01.824595  169271 ssh_runner.go:195] Run: which lz4
	I0719 05:12:01.828334  169271 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 05:12:01.832259  169271 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 05:12:01.832297  169271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0719 05:12:03.388820  169271 crio.go:462] duration metric: took 1.560521893s to copy over tarball
	I0719 05:12:03.388923  169271 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 05:12:06.215196  169271 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.826223833s)
	I0719 05:12:06.215236  169271 crio.go:469] duration metric: took 2.826367255s to extract the tarball
	I0719 05:12:06.215245  169271 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 05:12:06.260374  169271 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 05:12:06.311537  169271 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 05:12:06.311565  169271 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 05:12:06.311646  169271 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 05:12:06.311666  169271 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0719 05:12:06.311682  169271 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0719 05:12:06.311711  169271 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 05:12:06.311722  169271 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 05:12:06.311651  169271 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 05:12:06.311735  169271 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0719 05:12:06.311736  169271 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 05:12:06.313612  169271 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 05:12:06.313639  169271 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 05:12:06.313657  169271 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 05:12:06.313678  169271 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0719 05:12:06.313611  169271 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0719 05:12:06.313612  169271 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 05:12:06.313613  169271 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 05:12:06.313615  169271 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0719 05:12:06.513236  169271 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0719 05:12:06.555098  169271 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0719 05:12:06.555151  169271 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0719 05:12:06.555198  169271 ssh_runner.go:195] Run: which crictl
	I0719 05:12:06.558938  169271 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0719 05:12:06.571103  169271 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0719 05:12:06.576047  169271 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0719 05:12:06.590196  169271 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0719 05:12:06.592907  169271 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0719 05:12:06.599377  169271 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 05:12:06.616755  169271 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0719 05:12:06.658385  169271 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0719 05:12:06.692705  169271 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0719 05:12:06.692759  169271 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 05:12:06.692820  169271 ssh_runner.go:195] Run: which crictl
	I0719 05:12:06.748026  169271 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0719 05:12:06.748076  169271 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0719 05:12:06.748126  169271 ssh_runner.go:195] Run: which crictl
	I0719 05:12:06.748150  169271 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0719 05:12:06.748186  169271 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 05:12:06.748238  169271 ssh_runner.go:195] Run: which crictl
	I0719 05:12:06.758685  169271 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0719 05:12:06.758744  169271 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0719 05:12:06.758799  169271 ssh_runner.go:195] Run: which crictl
	I0719 05:12:06.771500  169271 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0719 05:12:06.771555  169271 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 05:12:06.771607  169271 ssh_runner.go:195] Run: which crictl
	I0719 05:12:06.776092  169271 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0719 05:12:06.776134  169271 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 05:12:06.776164  169271 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0719 05:12:06.776173  169271 ssh_runner.go:195] Run: which crictl
	I0719 05:12:06.776185  169271 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0719 05:12:06.776199  169271 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0719 05:12:06.776262  169271 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0719 05:12:06.779943  169271 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 05:12:06.883049  169271 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0719 05:12:06.883114  169271 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0719 05:12:06.883177  169271 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0719 05:12:06.883236  169271 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0719 05:12:06.883276  169271 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0719 05:12:06.890815  169271 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0719 05:12:06.917660  169271 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0719 05:12:07.157374  169271 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 05:12:07.301143  169271 cache_images.go:92] duration metric: took 989.560081ms to LoadCachedImages
	W0719 05:12:07.301235  169271 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0719 05:12:07.301251  169271 kubeadm.go:934] updating node { 192.168.50.182 8443 v1.20.0 crio true true} ...
	I0719 05:12:07.301383  169271 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-678139 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-678139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 05:12:07.301507  169271 ssh_runner.go:195] Run: crio config
	I0719 05:12:07.352091  169271 cni.go:84] Creating CNI manager for ""
	I0719 05:12:07.352118  169271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 05:12:07.352131  169271 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 05:12:07.352167  169271 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.182 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-678139 NodeName:kubernetes-upgrade-678139 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0719 05:12:07.352369  169271 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-678139"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.182
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.182"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 05:12:07.352446  169271 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0719 05:12:07.362128  169271 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 05:12:07.362193  169271 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 05:12:07.371585  169271 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0719 05:12:07.387283  169271 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 05:12:07.405031  169271 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0719 05:12:07.422525  169271 ssh_runner.go:195] Run: grep 192.168.50.182	control-plane.minikube.internal$ /etc/hosts
	I0719 05:12:07.426295  169271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.182	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 05:12:07.437694  169271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:12:07.574235  169271 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 05:12:07.591521  169271 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139 for IP: 192.168.50.182
	I0719 05:12:07.591547  169271 certs.go:194] generating shared ca certs ...
	I0719 05:12:07.591564  169271 certs.go:226] acquiring lock for ca certs: {Name:mk4073377b5f511f5cfaf63e5b0f12377e731a57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:12:07.591752  169271 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key
	I0719 05:12:07.591791  169271 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key
	I0719 05:12:07.591801  169271 certs.go:256] generating profile certs ...
	I0719 05:12:07.591856  169271 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/client.key
	I0719 05:12:07.591876  169271 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/client.crt with IP's: []
	I0719 05:12:07.783536  169271 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/client.crt ...
	I0719 05:12:07.783564  169271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/client.crt: {Name:mk19f6fadb08edcfa049b70c001855d2b898af47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:12:07.783761  169271 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/client.key ...
	I0719 05:12:07.783777  169271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/client.key: {Name:mk1bf69c7d017097f033a059c64277c1503133f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:12:07.783878  169271 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/apiserver.key.7e935d4b
	I0719 05:12:07.783902  169271 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/apiserver.crt.7e935d4b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.182]
	I0719 05:12:07.957256  169271 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/apiserver.crt.7e935d4b ...
	I0719 05:12:07.957292  169271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/apiserver.crt.7e935d4b: {Name:mkf9391e6321d5d87af5f916ddde4d6b92f7ab82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:12:07.957510  169271 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/apiserver.key.7e935d4b ...
	I0719 05:12:07.957537  169271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/apiserver.key.7e935d4b: {Name:mkedc85559ad0d11529fed678fed99bffd108e39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:12:07.957646  169271 certs.go:381] copying /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/apiserver.crt.7e935d4b -> /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/apiserver.crt
	I0719 05:12:07.957721  169271 certs.go:385] copying /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/apiserver.key.7e935d4b -> /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/apiserver.key
	I0719 05:12:07.957778  169271 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/proxy-client.key
	I0719 05:12:07.957792  169271 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/proxy-client.crt with IP's: []
	I0719 05:12:08.097289  169271 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/proxy-client.crt ...
	I0719 05:12:08.097321  169271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/proxy-client.crt: {Name:mkc8bf0b2ba6208fc152677a496a0b0a84701a0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:12:08.097496  169271 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/proxy-client.key ...
	I0719 05:12:08.097511  169271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/proxy-client.key: {Name:mk67b2fbc59dfab27f4c1d1f294671361a65203d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:12:08.097676  169271 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem (1338 bytes)
	W0719 05:12:08.097711  169271 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170_empty.pem, impossibly tiny 0 bytes
	I0719 05:12:08.097722  169271 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 05:12:08.097742  169271 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem (1082 bytes)
	I0719 05:12:08.097764  169271 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem (1123 bytes)
	I0719 05:12:08.097783  169271 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem (1679 bytes)
	I0719 05:12:08.097817  169271 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem (1708 bytes)
	I0719 05:12:08.098423  169271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 05:12:08.126115  169271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 05:12:08.152636  169271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 05:12:08.179096  169271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 05:12:08.204562  169271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0719 05:12:08.229368  169271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 05:12:08.257701  169271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 05:12:08.285015  169271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 05:12:08.311008  169271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 05:12:08.338002  169271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem --> /usr/share/ca-certificates/130170.pem (1338 bytes)
	I0719 05:12:08.365377  169271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem --> /usr/share/ca-certificates/1301702.pem (1708 bytes)
	I0719 05:12:08.388622  169271 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 05:12:08.405303  169271 ssh_runner.go:195] Run: openssl version
	I0719 05:12:08.411548  169271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1301702.pem && ln -fs /usr/share/ca-certificates/1301702.pem /etc/ssl/certs/1301702.pem"
	I0719 05:12:08.422125  169271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1301702.pem
	I0719 05:12:08.426496  169271 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 04:19 /usr/share/ca-certificates/1301702.pem
	I0719 05:12:08.426555  169271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1301702.pem
	I0719 05:12:08.432287  169271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1301702.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 05:12:08.442524  169271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 05:12:08.452875  169271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:12:08.457023  169271 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:38 /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:12:08.457105  169271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:12:08.462612  169271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 05:12:08.475605  169271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130170.pem && ln -fs /usr/share/ca-certificates/130170.pem /etc/ssl/certs/130170.pem"
	I0719 05:12:08.487749  169271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130170.pem
	I0719 05:12:08.492099  169271 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 04:19 /usr/share/ca-certificates/130170.pem
	I0719 05:12:08.492164  169271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130170.pem
	I0719 05:12:08.497611  169271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/130170.pem /etc/ssl/certs/51391683.0"
	I0719 05:12:08.508348  169271 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 05:12:08.514700  169271 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 05:12:08.514764  169271 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-678139 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.20.0 ClusterName:kubernetes-upgrade-678139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.182 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 05:12:08.514860  169271 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 05:12:08.514921  169271 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 05:12:08.569506  169271 cri.go:89] found id: ""
	I0719 05:12:08.569576  169271 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 05:12:08.578995  169271 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 05:12:08.588073  169271 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 05:12:08.596906  169271 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 05:12:08.596923  169271 kubeadm.go:157] found existing configuration files:
	
	I0719 05:12:08.596958  169271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 05:12:08.604752  169271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 05:12:08.604810  169271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 05:12:08.613119  169271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 05:12:08.621766  169271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 05:12:08.621851  169271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 05:12:08.630504  169271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 05:12:08.639106  169271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 05:12:08.639164  169271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 05:12:08.647832  169271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 05:12:08.656047  169271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 05:12:08.656103  169271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 05:12:08.664532  169271 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 05:12:08.964468  169271 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 05:14:07.417897  169271 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 05:14:07.418008  169271 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0719 05:14:07.420037  169271 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 05:14:07.420119  169271 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 05:14:07.420207  169271 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 05:14:07.420319  169271 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 05:14:07.420452  169271 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 05:14:07.420538  169271 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 05:14:07.488911  169271 out.go:204]   - Generating certificates and keys ...
	I0719 05:14:07.489101  169271 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 05:14:07.489190  169271 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 05:14:07.489315  169271 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 05:14:07.489416  169271 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0719 05:14:07.489516  169271 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0719 05:14:07.489587  169271 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0719 05:14:07.489664  169271 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0719 05:14:07.489856  169271 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-678139 localhost] and IPs [192.168.50.182 127.0.0.1 ::1]
	I0719 05:14:07.489930  169271 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0719 05:14:07.490121  169271 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-678139 localhost] and IPs [192.168.50.182 127.0.0.1 ::1]
	I0719 05:14:07.490218  169271 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 05:14:07.490317  169271 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 05:14:07.490385  169271 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0719 05:14:07.490466  169271 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 05:14:07.490537  169271 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 05:14:07.490611  169271 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 05:14:07.490714  169271 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 05:14:07.490802  169271 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 05:14:07.490929  169271 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 05:14:07.491033  169271 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 05:14:07.491084  169271 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 05:14:07.491170  169271 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 05:14:07.493951  169271 out.go:204]   - Booting up control plane ...
	I0719 05:14:07.494069  169271 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 05:14:07.494176  169271 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 05:14:07.494240  169271 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 05:14:07.494320  169271 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 05:14:07.494540  169271 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 05:14:07.494611  169271 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 05:14:07.494697  169271 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 05:14:07.494988  169271 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 05:14:07.495101  169271 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 05:14:07.495341  169271 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 05:14:07.495447  169271 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 05:14:07.495701  169271 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 05:14:07.495800  169271 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 05:14:07.496054  169271 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 05:14:07.496141  169271 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 05:14:07.496327  169271 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 05:14:07.496346  169271 kubeadm.go:310] 
	I0719 05:14:07.496389  169271 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 05:14:07.496424  169271 kubeadm.go:310] 		timed out waiting for the condition
	I0719 05:14:07.496433  169271 kubeadm.go:310] 
	I0719 05:14:07.496472  169271 kubeadm.go:310] 	This error is likely caused by:
	I0719 05:14:07.496501  169271 kubeadm.go:310] 		- The kubelet is not running
	I0719 05:14:07.496628  169271 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 05:14:07.496636  169271 kubeadm.go:310] 
	I0719 05:14:07.496718  169271 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 05:14:07.496764  169271 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 05:14:07.496813  169271 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 05:14:07.496822  169271 kubeadm.go:310] 
	I0719 05:14:07.496971  169271 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 05:14:07.497111  169271 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 05:14:07.497121  169271 kubeadm.go:310] 
	I0719 05:14:07.497210  169271 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 05:14:07.497316  169271 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 05:14:07.497436  169271 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 05:14:07.497526  169271 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 05:14:07.497585  169271 kubeadm.go:310] 
	W0719 05:14:07.497676  169271 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-678139 localhost] and IPs [192.168.50.182 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-678139 localhost] and IPs [192.168.50.182 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-678139 localhost] and IPs [192.168.50.182 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-678139 localhost] and IPs [192.168.50.182 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0719 05:14:07.497719  169271 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0719 05:14:07.959411  169271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 05:14:07.975417  169271 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 05:14:07.985033  169271 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 05:14:07.985077  169271 kubeadm.go:157] found existing configuration files:
	
	I0719 05:14:07.985135  169271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 05:14:07.994208  169271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 05:14:07.994267  169271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 05:14:08.003754  169271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 05:14:08.012670  169271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 05:14:08.012729  169271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 05:14:08.021790  169271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 05:14:08.030314  169271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 05:14:08.030373  169271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 05:14:08.042096  169271 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 05:14:08.053001  169271 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 05:14:08.053092  169271 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 05:14:08.064916  169271 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 05:14:08.139318  169271 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0719 05:14:08.139391  169271 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 05:14:08.293937  169271 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 05:14:08.294227  169271 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 05:14:08.294435  169271 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 05:14:08.483013  169271 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 05:14:08.484980  169271 out.go:204]   - Generating certificates and keys ...
	I0719 05:14:08.485093  169271 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 05:14:08.485186  169271 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 05:14:08.485293  169271 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 05:14:08.485380  169271 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0719 05:14:08.485537  169271 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0719 05:14:08.485625  169271 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0719 05:14:08.486172  169271 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0719 05:14:08.486626  169271 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0719 05:14:08.487059  169271 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 05:14:08.487647  169271 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 05:14:08.487738  169271 kubeadm.go:310] [certs] Using the existing "sa" key
	I0719 05:14:08.487852  169271 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 05:14:08.594377  169271 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 05:14:08.708227  169271 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 05:14:08.854720  169271 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 05:14:08.991487  169271 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 05:14:09.019071  169271 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 05:14:09.020697  169271 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 05:14:09.020770  169271 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 05:14:09.184729  169271 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 05:14:09.186701  169271 out.go:204]   - Booting up control plane ...
	I0719 05:14:09.186818  169271 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 05:14:09.194975  169271 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 05:14:09.196598  169271 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 05:14:09.197796  169271 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 05:14:09.211900  169271 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 05:14:49.211905  169271 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0719 05:14:49.211989  169271 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 05:14:49.212163  169271 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 05:14:54.211841  169271 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 05:14:54.212136  169271 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 05:15:04.212269  169271 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 05:15:04.212545  169271 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 05:15:24.212924  169271 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 05:15:24.213182  169271 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 05:16:04.214716  169271 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0719 05:16:04.214968  169271 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0719 05:16:04.214988  169271 kubeadm.go:310] 
	I0719 05:16:04.215049  169271 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0719 05:16:04.215085  169271 kubeadm.go:310] 		timed out waiting for the condition
	I0719 05:16:04.215092  169271 kubeadm.go:310] 
	I0719 05:16:04.215126  169271 kubeadm.go:310] 	This error is likely caused by:
	I0719 05:16:04.215156  169271 kubeadm.go:310] 		- The kubelet is not running
	I0719 05:16:04.215248  169271 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0719 05:16:04.215259  169271 kubeadm.go:310] 
	I0719 05:16:04.215356  169271 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0719 05:16:04.215389  169271 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0719 05:16:04.215428  169271 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0719 05:16:04.215459  169271 kubeadm.go:310] 
	I0719 05:16:04.215568  169271 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0719 05:16:04.215645  169271 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0719 05:16:04.215651  169271 kubeadm.go:310] 
	I0719 05:16:04.215751  169271 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0719 05:16:04.215826  169271 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0719 05:16:04.215896  169271 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0719 05:16:04.215980  169271 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0719 05:16:04.215994  169271 kubeadm.go:310] 
	I0719 05:16:04.216895  169271 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 05:16:04.217020  169271 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0719 05:16:04.217144  169271 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0719 05:16:04.217243  169271 kubeadm.go:394] duration metric: took 3m55.702484435s to StartCluster
	I0719 05:16:04.217296  169271 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0719 05:16:04.217363  169271 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0719 05:16:04.262978  169271 cri.go:89] found id: ""
	I0719 05:16:04.263010  169271 logs.go:276] 0 containers: []
	W0719 05:16:04.263020  169271 logs.go:278] No container was found matching "kube-apiserver"
	I0719 05:16:04.263029  169271 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0719 05:16:04.263097  169271 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0719 05:16:04.307693  169271 cri.go:89] found id: ""
	I0719 05:16:04.307731  169271 logs.go:276] 0 containers: []
	W0719 05:16:04.307743  169271 logs.go:278] No container was found matching "etcd"
	I0719 05:16:04.307752  169271 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0719 05:16:04.307816  169271 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0719 05:16:04.346874  169271 cri.go:89] found id: ""
	I0719 05:16:04.346903  169271 logs.go:276] 0 containers: []
	W0719 05:16:04.346913  169271 logs.go:278] No container was found matching "coredns"
	I0719 05:16:04.346922  169271 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0719 05:16:04.346990  169271 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0719 05:16:04.387466  169271 cri.go:89] found id: ""
	I0719 05:16:04.387498  169271 logs.go:276] 0 containers: []
	W0719 05:16:04.387509  169271 logs.go:278] No container was found matching "kube-scheduler"
	I0719 05:16:04.387517  169271 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0719 05:16:04.387597  169271 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0719 05:16:04.428795  169271 cri.go:89] found id: ""
	I0719 05:16:04.428822  169271 logs.go:276] 0 containers: []
	W0719 05:16:04.428830  169271 logs.go:278] No container was found matching "kube-proxy"
	I0719 05:16:04.428837  169271 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0719 05:16:04.428889  169271 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0719 05:16:04.466998  169271 cri.go:89] found id: ""
	I0719 05:16:04.467030  169271 logs.go:276] 0 containers: []
	W0719 05:16:04.467040  169271 logs.go:278] No container was found matching "kube-controller-manager"
	I0719 05:16:04.467048  169271 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0719 05:16:04.467104  169271 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0719 05:16:04.505385  169271 cri.go:89] found id: ""
	I0719 05:16:04.505417  169271 logs.go:276] 0 containers: []
	W0719 05:16:04.505426  169271 logs.go:278] No container was found matching "kindnet"
	I0719 05:16:04.505436  169271 logs.go:123] Gathering logs for describe nodes ...
	I0719 05:16:04.505449  169271 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0719 05:16:04.623211  169271 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0719 05:16:04.623239  169271 logs.go:123] Gathering logs for CRI-O ...
	I0719 05:16:04.623255  169271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0719 05:16:04.716839  169271 logs.go:123] Gathering logs for container status ...
	I0719 05:16:04.716886  169271 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0719 05:16:04.756858  169271 logs.go:123] Gathering logs for kubelet ...
	I0719 05:16:04.756894  169271 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0719 05:16:04.812698  169271 logs.go:123] Gathering logs for dmesg ...
	I0719 05:16:04.812749  169271 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0719 05:16:04.825642  169271 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0719 05:16:04.825692  169271 out.go:239] * 
	W0719 05:16:04.825752  169271 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 05:16:04.825772  169271 out.go:239] * 
	W0719 05:16:04.826636  169271 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 05:16:04.829857  169271 out.go:177] 
	W0719 05:16:04.831232  169271 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0719 05:16:04.831308  169271 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0719 05:16:04.831340  169271 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0719 05:16:04.832888  169271 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-678139 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
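The kubelet inside the guest never answers its health probe, so kubeadm times out in the wait-control-plane phase and minikube exits with status 109, reporting reason K8S_KUBELET_NOT_RUNNING. A minimal follow-up sketch, using only commands already suggested in the output above and the same profile and flags as the failing run (the cgroup-driver override is the log's own suggestion, not a verified fix for this report):

	# inspect the kubelet and CRI-O containers inside the guest VM
	minikube ssh -p kubernetes-upgrade-678139 -- sudo journalctl -xeu kubelet
	minikube ssh -p kubernetes-upgrade-678139 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a

	# retry the v1.20.0 start with the kubelet cgroup driver forced to systemd, as the log suggests
	out/minikube-linux-amd64 start -p kubernetes-upgrade-678139 --memory=2200 \
	  --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd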
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-678139
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-678139: (1.326701281s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-678139 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-678139 status --format={{.Host}}: exit status 7 (64.545312ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-678139 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0719 05:16:19.881237  130170 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/functional-554179/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-678139 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m7.391885291s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-678139 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-678139 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-678139 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (107.592498ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-678139] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19302-122995/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-122995/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-678139
	    minikube start -p kubernetes-upgrade-678139 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6781392 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-678139 --kubernetes-version=v1.31.0-beta.0
	    

                                                
                                                
** /stderr **
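For reference, the guard asserted here can be checked by hand: rerunning the start with the older Kubernetes version against the already-upgraded profile should fail fast with exit status 106 and reason K8S_DOWNGRADE_UNSUPPORTED, as captured above. A sketch reusing the test's own command line:

	out/minikube-linux-amd64 start -p kubernetes-upgrade-678139 --memory=2200 \
	  --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio
	echo "exit=$?"   # 106 expected (K8S_DOWNGRADE_UNSUPPORTED)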
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-678139 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-678139 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.808049892s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-07-19 05:18:00.673020701 +0000 UTC m=+6045.827806294
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-678139 -n kubernetes-upgrade-678139
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-678139 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-678139 logs -n 25: (1.568732523s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-186540 sudo cat             | cilium-186540             | jenkins | v1.33.1 | 19 Jul 24 05:15 UTC |                     |
	|         | /etc/containerd/config.toml           |                           |         |         |                     |                     |
	| ssh     | -p cilium-186540 sudo                 | cilium-186540             | jenkins | v1.33.1 | 19 Jul 24 05:15 UTC |                     |
	|         | containerd config dump                |                           |         |         |                     |                     |
	| ssh     | -p cilium-186540 sudo                 | cilium-186540             | jenkins | v1.33.1 | 19 Jul 24 05:15 UTC |                     |
	|         | systemctl status crio --all           |                           |         |         |                     |                     |
	|         | --full --no-pager                     |                           |         |         |                     |                     |
	| ssh     | -p cilium-186540 sudo                 | cilium-186540             | jenkins | v1.33.1 | 19 Jul 24 05:15 UTC |                     |
	|         | systemctl cat crio --no-pager         |                           |         |         |                     |                     |
	| ssh     | -p cilium-186540 sudo find            | cilium-186540             | jenkins | v1.33.1 | 19 Jul 24 05:15 UTC |                     |
	|         | /etc/crio -type f -exec sh -c         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-186540 sudo crio            | cilium-186540             | jenkins | v1.33.1 | 19 Jul 24 05:15 UTC |                     |
	|         | config                                |                           |         |         |                     |                     |
	| delete  | -p cilium-186540                      | cilium-186540             | jenkins | v1.33.1 | 19 Jul 24 05:15 UTC | 19 Jul 24 05:15 UTC |
	| delete  | -p force-systemd-env-298141           | force-systemd-env-298141  | jenkins | v1.33.1 | 19 Jul 24 05:15 UTC | 19 Jul 24 05:15 UTC |
	| start   | -p force-systemd-flag-670923          | force-systemd-flag-670923 | jenkins | v1.33.1 | 19 Jul 24 05:15 UTC | 19 Jul 24 05:16 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-215036             | minikube                  | jenkins | v1.26.0 | 19 Jul 24 05:15 UTC | 19 Jul 24 05:16 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-670923 ssh cat     | force-systemd-flag-670923 | jenkins | v1.33.1 | 19 Jul 24 05:16 UTC | 19 Jul 24 05:16 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-670923          | force-systemd-flag-670923 | jenkins | v1.33.1 | 19 Jul 24 05:16 UTC | 19 Jul 24 05:16 UTC |
	| stop    | -p kubernetes-upgrade-678139          | kubernetes-upgrade-678139 | jenkins | v1.33.1 | 19 Jul 24 05:16 UTC | 19 Jul 24 05:16 UTC |
	| start   | -p cert-options-423966                | cert-options-423966       | jenkins | v1.33.1 | 19 Jul 24 05:16 UTC | 19 Jul 24 05:16 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-678139          | kubernetes-upgrade-678139 | jenkins | v1.33.1 | 19 Jul 24 05:16 UTC | 19 Jul 24 05:17 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-215036 stop           | minikube                  | jenkins | v1.26.0 | 19 Jul 24 05:16 UTC | 19 Jul 24 05:16 UTC |
	| start   | -p stopped-upgrade-215036             | stopped-upgrade-215036    | jenkins | v1.33.1 | 19 Jul 24 05:16 UTC | 19 Jul 24 05:17 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-423966 ssh               | cert-options-423966       | jenkins | v1.33.1 | 19 Jul 24 05:16 UTC | 19 Jul 24 05:16 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-423966 -- sudo        | cert-options-423966       | jenkins | v1.33.1 | 19 Jul 24 05:16 UTC | 19 Jul 24 05:16 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-423966                | cert-options-423966       | jenkins | v1.33.1 | 19 Jul 24 05:16 UTC | 19 Jul 24 05:16 UTC |
	| start   | -p old-k8s-version-901291             | old-k8s-version-901291    | jenkins | v1.33.1 | 19 Jul 24 05:16 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --kvm-network=default                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts               |                           |         |         |                     |                     |
	|         | --keep-context=false                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-678139          | kubernetes-upgrade-678139 | jenkins | v1.33.1 | 19 Jul 24 05:17 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-678139          | kubernetes-upgrade-678139 | jenkins | v1.33.1 | 19 Jul 24 05:17 UTC | 19 Jul 24 05:18 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-215036             | stopped-upgrade-215036    | jenkins | v1.33.1 | 19 Jul 24 05:17 UTC | 19 Jul 24 05:17 UTC |
	| start   | -p no-preload-783098 --memory=2200    | no-preload-783098         | jenkins | v1.33.1 | 19 Jul 24 05:17 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 05:17:36
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 05:17:36.500486  177311 out.go:291] Setting OutFile to fd 1 ...
	I0719 05:17:36.500640  177311 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 05:17:36.500652  177311 out.go:304] Setting ErrFile to fd 2...
	I0719 05:17:36.500658  177311 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 05:17:36.500941  177311 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-122995/.minikube/bin
	I0719 05:17:36.501842  177311 out.go:298] Setting JSON to false
	I0719 05:17:36.503321  177311 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":10799,"bootTime":1721355457,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 05:17:36.503425  177311 start.go:139] virtualization: kvm guest
	I0719 05:17:36.505737  177311 out.go:177] * [no-preload-783098] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 05:17:36.507068  177311 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 05:17:36.507126  177311 notify.go:220] Checking for updates...
	I0719 05:17:36.509350  177311 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 05:17:36.510580  177311 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-122995/kubeconfig
	I0719 05:17:36.511784  177311 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-122995/.minikube
	I0719 05:17:36.512906  177311 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 05:17:36.514055  177311 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 05:17:36.515747  177311 config.go:182] Loaded profile config "cert-expiration-655634": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 05:17:36.516054  177311 config.go:182] Loaded profile config "kubernetes-upgrade-678139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 05:17:36.516246  177311 config.go:182] Loaded profile config "old-k8s-version-901291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0719 05:17:36.516374  177311 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 05:17:36.563034  177311 out.go:177] * Using the kvm2 driver based on user configuration
	I0719 05:17:36.564340  177311 start.go:297] selected driver: kvm2
	I0719 05:17:36.564363  177311 start.go:901] validating driver "kvm2" against <nil>
	I0719 05:17:36.564378  177311 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 05:17:36.565058  177311 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 05:17:36.565217  177311 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-122995/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 05:17:36.581364  177311 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 05:17:36.581423  177311 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 05:17:36.581654  177311 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 05:17:36.581688  177311 cni.go:84] Creating CNI manager for ""
	I0719 05:17:36.581700  177311 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 05:17:36.581710  177311 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 05:17:36.581774  177311 start.go:340] cluster config:
	{Name:no-preload-783098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-783098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 05:17:36.581892  177311 iso.go:125] acquiring lock: {Name:mk610026cb7ac7ecfa6440021a031d3b49160f81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 05:17:36.583792  177311 out.go:177] * Starting "no-preload-783098" primary control-plane node in "no-preload-783098" cluster
	I0719 05:17:35.479941  176647 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 05:17:35.653922  176647 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 05:17:35.661390  176647 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 05:17:35.661460  176647 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 05:17:35.679180  176647 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 05:17:35.679205  176647 start.go:495] detecting cgroup driver to use...
	I0719 05:17:35.679285  176647 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 05:17:35.697643  176647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 05:17:35.712976  176647 docker.go:217] disabling cri-docker service (if available) ...
	I0719 05:17:35.713039  176647 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 05:17:35.727683  176647 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 05:17:35.741806  176647 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 05:17:35.889053  176647 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 05:17:36.058434  176647 docker.go:233] disabling docker service ...
	I0719 05:17:36.058502  176647 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 05:17:36.073850  176647 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 05:17:36.086525  176647 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 05:17:36.220878  176647 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 05:17:36.352461  176647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 05:17:36.368304  176647 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 05:17:36.388108  176647 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0719 05:17:36.388174  176647 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 05:17:36.398644  176647 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 05:17:36.398715  176647 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 05:17:36.408897  176647 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 05:17:36.418938  176647 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 05:17:36.429104  176647 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 05:17:36.439714  176647 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 05:17:36.450339  176647 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0719 05:17:36.450402  176647 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0719 05:17:36.463759  176647 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 05:17:36.473743  176647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:17:36.607307  176647 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 05:17:36.755951  176647 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 05:17:36.756034  176647 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 05:17:36.760439  176647 start.go:563] Will wait 60s for crictl version
	I0719 05:17:36.760494  176647 ssh_runner.go:195] Run: which crictl
	I0719 05:17:36.764049  176647 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 05:17:36.808100  176647 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0719 05:17:36.808202  176647 ssh_runner.go:195] Run: crio --version
	I0719 05:17:36.835004  176647 ssh_runner.go:195] Run: crio --version
	I0719 05:17:36.874124  176647 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0719 05:17:35.371702  176994 machine.go:94] provisionDockerMachine start ...
	I0719 05:17:35.371730  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .DriverName
	I0719 05:17:35.371991  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHHostname
	I0719 05:17:35.375692  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:17:35.376135  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:3f:2e", ip: ""} in network mk-kubernetes-upgrade-678139: {Iface:virbr2 ExpiryTime:2024-07-19 06:16:41 +0000 UTC Type:0 Mac:52:54:00:77:3f:2e Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-678139 Clientid:01:52:54:00:77:3f:2e}
	I0719 05:17:35.376173  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined IP address 192.168.50.182 and MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:17:35.376355  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHPort
	I0719 05:17:35.376520  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHKeyPath
	I0719 05:17:35.376631  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHKeyPath
	I0719 05:17:35.376738  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHUsername
	I0719 05:17:35.376937  176994 main.go:141] libmachine: Using SSH client type: native
	I0719 05:17:35.377212  176994 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.182 22 <nil> <nil>}
	I0719 05:17:35.377236  176994 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 05:17:35.489893  176994 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-678139
	
	I0719 05:17:35.489928  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetMachineName
	I0719 05:17:35.490174  176994 buildroot.go:166] provisioning hostname "kubernetes-upgrade-678139"
	I0719 05:17:35.490200  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetMachineName
	I0719 05:17:35.490408  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHHostname
	I0719 05:17:35.493676  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:17:35.494093  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:3f:2e", ip: ""} in network mk-kubernetes-upgrade-678139: {Iface:virbr2 ExpiryTime:2024-07-19 06:16:41 +0000 UTC Type:0 Mac:52:54:00:77:3f:2e Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-678139 Clientid:01:52:54:00:77:3f:2e}
	I0719 05:17:35.494126  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined IP address 192.168.50.182 and MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:17:35.494315  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHPort
	I0719 05:17:35.494591  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHKeyPath
	I0719 05:17:35.494782  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHKeyPath
	I0719 05:17:35.494901  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHUsername
	I0719 05:17:35.495151  176994 main.go:141] libmachine: Using SSH client type: native
	I0719 05:17:35.495334  176994 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.182 22 <nil> <nil>}
	I0719 05:17:35.495355  176994 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-678139 && echo "kubernetes-upgrade-678139" | sudo tee /etc/hostname
	I0719 05:17:35.634048  176994 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-678139
	
	I0719 05:17:35.634080  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHHostname
	I0719 05:17:35.637151  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:17:35.637732  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:3f:2e", ip: ""} in network mk-kubernetes-upgrade-678139: {Iface:virbr2 ExpiryTime:2024-07-19 06:16:41 +0000 UTC Type:0 Mac:52:54:00:77:3f:2e Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-678139 Clientid:01:52:54:00:77:3f:2e}
	I0719 05:17:35.637766  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined IP address 192.168.50.182 and MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:17:35.638022  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHPort
	I0719 05:17:35.638250  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHKeyPath
	I0719 05:17:35.638444  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHKeyPath
	I0719 05:17:35.638622  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHUsername
	I0719 05:17:35.638799  176994 main.go:141] libmachine: Using SSH client type: native
	I0719 05:17:35.639008  176994 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.182 22 <nil> <nil>}
	I0719 05:17:35.639029  176994 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-678139' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-678139/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-678139' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 05:17:35.759490  176994 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 05:17:35.759524  176994 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19302-122995/.minikube CaCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19302-122995/.minikube}
	I0719 05:17:35.759570  176994 buildroot.go:174] setting up certificates
	I0719 05:17:35.759587  176994 provision.go:84] configureAuth start
	I0719 05:17:35.759604  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetMachineName
	I0719 05:17:35.759904  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetIP
	I0719 05:17:35.763298  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:17:35.763811  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:3f:2e", ip: ""} in network mk-kubernetes-upgrade-678139: {Iface:virbr2 ExpiryTime:2024-07-19 06:16:41 +0000 UTC Type:0 Mac:52:54:00:77:3f:2e Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-678139 Clientid:01:52:54:00:77:3f:2e}
	I0719 05:17:35.763859  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined IP address 192.168.50.182 and MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:17:35.764007  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHHostname
	I0719 05:17:35.766745  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:17:35.767176  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:3f:2e", ip: ""} in network mk-kubernetes-upgrade-678139: {Iface:virbr2 ExpiryTime:2024-07-19 06:16:41 +0000 UTC Type:0 Mac:52:54:00:77:3f:2e Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-678139 Clientid:01:52:54:00:77:3f:2e}
	I0719 05:17:35.767212  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined IP address 192.168.50.182 and MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:17:35.767373  176994 provision.go:143] copyHostCerts
	I0719 05:17:35.767455  176994 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem, removing ...
	I0719 05:17:35.767469  176994 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem
	I0719 05:17:35.767543  176994 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/ca.pem (1082 bytes)
	I0719 05:17:35.767673  176994 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem, removing ...
	I0719 05:17:35.767686  176994 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem
	I0719 05:17:35.767717  176994 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/cert.pem (1123 bytes)
	I0719 05:17:35.767779  176994 exec_runner.go:144] found /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem, removing ...
	I0719 05:17:35.767786  176994 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem
	I0719 05:17:35.767805  176994 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19302-122995/.minikube/key.pem (1679 bytes)
	I0719 05:17:35.767855  176994 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-678139 san=[127.0.0.1 192.168.50.182 kubernetes-upgrade-678139 localhost minikube]
	I0719 05:17:36.165151  176994 provision.go:177] copyRemoteCerts
	I0719 05:17:36.165224  176994 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 05:17:36.165260  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHHostname
	I0719 05:17:36.192798  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:17:36.193261  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:3f:2e", ip: ""} in network mk-kubernetes-upgrade-678139: {Iface:virbr2 ExpiryTime:2024-07-19 06:16:41 +0000 UTC Type:0 Mac:52:54:00:77:3f:2e Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-678139 Clientid:01:52:54:00:77:3f:2e}
	I0719 05:17:36.193294  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined IP address 192.168.50.182 and MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:17:36.193506  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHPort
	I0719 05:17:36.193744  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHKeyPath
	I0719 05:17:36.193900  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHUsername
	I0719 05:17:36.194084  176994 sshutil.go:53] new ssh client: &{IP:192.168.50.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/kubernetes-upgrade-678139/id_rsa Username:docker}
	I0719 05:17:36.279350  176994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 05:17:36.307304  176994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0719 05:17:36.334291  176994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 05:17:36.377098  176994 provision.go:87] duration metric: took 617.492105ms to configureAuth
	I0719 05:17:36.377132  176994 buildroot.go:189] setting minikube options for container-runtime
	I0719 05:17:36.377346  176994 config.go:182] Loaded profile config "kubernetes-upgrade-678139": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0719 05:17:36.377436  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHHostname
	I0719 05:17:36.380614  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:17:36.381059  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:3f:2e", ip: ""} in network mk-kubernetes-upgrade-678139: {Iface:virbr2 ExpiryTime:2024-07-19 06:16:41 +0000 UTC Type:0 Mac:52:54:00:77:3f:2e Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-678139 Clientid:01:52:54:00:77:3f:2e}
	I0719 05:17:36.381126  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined IP address 192.168.50.182 and MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:17:36.381290  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHPort
	I0719 05:17:36.381514  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHKeyPath
	I0719 05:17:36.381704  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHKeyPath
	I0719 05:17:36.381901  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHUsername
	I0719 05:17:36.382107  176994 main.go:141] libmachine: Using SSH client type: native
	I0719 05:17:36.382270  176994 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.182 22 <nil> <nil>}
	I0719 05:17:36.382291  176994 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0719 05:17:36.875164  176647 main.go:141] libmachine: (old-k8s-version-901291) Calling .GetIP
	I0719 05:17:36.878272  176647 main.go:141] libmachine: (old-k8s-version-901291) DBG | domain old-k8s-version-901291 has defined MAC address 52:54:00:8f:0c:62 in network mk-old-k8s-version-901291
	I0719 05:17:36.878700  176647 main.go:141] libmachine: (old-k8s-version-901291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8f:0c:62", ip: ""} in network mk-old-k8s-version-901291: {Iface:virbr3 ExpiryTime:2024-07-19 06:17:25 +0000 UTC Type:0 Mac:52:54:00:8f:0c:62 Iaid: IPaddr:192.168.61.237 Prefix:24 Hostname:old-k8s-version-901291 Clientid:01:52:54:00:8f:0c:62}
	I0719 05:17:36.878730  176647 main.go:141] libmachine: (old-k8s-version-901291) DBG | domain old-k8s-version-901291 has defined IP address 192.168.61.237 and MAC address 52:54:00:8f:0c:62 in network mk-old-k8s-version-901291
	I0719 05:17:36.878954  176647 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0719 05:17:36.882971  176647 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 05:17:36.895088  176647 kubeadm.go:883] updating cluster {Name:old-k8s-version-901291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.237 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 05:17:36.895230  176647 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 05:17:36.895289  176647 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 05:17:36.931274  176647 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 05:17:36.931335  176647 ssh_runner.go:195] Run: which lz4
	I0719 05:17:36.936319  176647 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0719 05:17:36.941646  176647 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 05:17:36.941682  176647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0719 05:17:38.401653  176647 crio.go:462] duration metric: took 1.465388149s to copy over tarball
	I0719 05:17:38.401750  176647 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 05:17:36.585158  177311 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0719 05:17:36.585372  177311 profile.go:143] Saving config to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/no-preload-783098/config.json ...
	I0719 05:17:36.585385  177311 cache.go:107] acquiring lock: {Name:mk532a5d247304b5eaf0e6a6117f7d6ce3607aa1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 05:17:36.585416  177311 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/no-preload-783098/config.json: {Name:mk656955d5d67e3aa0e31be673920b1fb41978b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:17:36.585411  177311 cache.go:107] acquiring lock: {Name:mk2af256580aefc81180a217f34f2de5170192e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 05:17:36.585457  177311 cache.go:107] acquiring lock: {Name:mk46e9ab01344c396255acad584a28f4e314415e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 05:17:36.585569  177311 cache.go:115] /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0719 05:17:36.585585  177311 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 128.966µs
	I0719 05:17:36.585596  177311 start.go:360] acquireMachinesLock for no-preload-783098: {Name:mkfbbe6ca8c44534b944b48224a0199ec825bc72 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 05:17:36.585599  177311 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 05:17:36.585610  177311 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0719 05:17:36.585627  177311 cache.go:107] acquiring lock: {Name:mk7b65f1aa7ed09376645b72db339e93d786fb97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 05:17:36.585726  177311 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 05:17:36.585845  177311 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0719 05:17:36.585886  177311 cache.go:107] acquiring lock: {Name:mk7587b9abeb71d77f0364b9576362c2be19ae5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 05:17:36.585931  177311 cache.go:107] acquiring lock: {Name:mk7749f1e2b15d56f137ca6428bde6a0edde2112 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 05:17:36.585994  177311 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 05:17:36.586016  177311 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0719 05:17:36.586143  177311 cache.go:107] acquiring lock: {Name:mk125e8f3c06226450726aab980b6e3dbd59321e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 05:17:36.586283  177311 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 05:17:36.586251  177311 cache.go:107] acquiring lock: {Name:mkbc9832f28206b719145ccaa11d1ab7df2413f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 05:17:36.586468  177311 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 05:17:36.588371  177311 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0719 05:17:36.588417  177311 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0719 05:17:36.588805  177311 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0719 05:17:36.588443  177311 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0719 05:17:36.589739  177311 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0719 05:17:36.590273  177311 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0719 05:17:36.590265  177311 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0719 05:17:37.185981  177311 cache.go:162] opening:  /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0719 05:17:37.202983  177311 cache.go:162] opening:  /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0719 05:17:37.204980  177311 cache.go:162] opening:  /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0719 05:17:37.225361  177311 cache.go:162] opening:  /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0719 05:17:37.228318  177311 cache.go:162] opening:  /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0719 05:17:37.230347  177311 cache.go:162] opening:  /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0719 05:17:37.286774  177311 cache.go:162] opening:  /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0719 05:17:37.399635  177311 cache.go:157] /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0719 05:17:37.399668  177311 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 814.29415ms
	I0719 05:17:37.399684  177311 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0719 05:17:37.949278  177311 cache.go:157] /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0719 05:17:37.949320  177311 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 1.363923074s
	I0719 05:17:37.949338  177311 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0719 05:17:38.891064  177311 cache.go:157] /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0719 05:17:38.891107  177311 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 2.305478348s
	I0719 05:17:38.891130  177311 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0719 05:17:39.048122  177311 cache.go:157] /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0719 05:17:39.048157  177311 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 2.462017105s
	I0719 05:17:39.048173  177311 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0719 05:17:39.379016  177311 cache.go:157] /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0719 05:17:39.379055  177311 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 2.793172971s
	I0719 05:17:39.379071  177311 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0719 05:17:39.425550  177311 cache.go:157] /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0719 05:17:39.425590  177311 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 2.839403931s
	I0719 05:17:39.425625  177311 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0719 05:17:39.850034  177311 cache.go:157] /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 exists
	I0719 05:17:39.850068  177311 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0" took 3.264140276s
	I0719 05:17:39.850082  177311 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0719 05:17:39.850097  177311 cache.go:87] Successfully saved all images to host disk.
	I0719 05:17:42.670085  177311 start.go:364] duration metric: took 6.084460602s to acquireMachinesLock for "no-preload-783098"
	I0719 05:17:42.670148  177311 start.go:93] Provisioning new machine with config: &{Name:no-preload-783098 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-783098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0719 05:17:42.670267  177311 start.go:125] createHost starting for "" (driver="kvm2")
	I0719 05:17:42.423394  176994 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0719 05:17:42.423427  176994 machine.go:97] duration metric: took 7.051704822s to provisionDockerMachine
	I0719 05:17:42.423443  176994 start.go:293] postStartSetup for "kubernetes-upgrade-678139" (driver="kvm2")
	I0719 05:17:42.423459  176994 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 05:17:42.423484  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .DriverName
	I0719 05:17:42.423848  176994 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 05:17:42.423875  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHHostname
	I0719 05:17:42.427048  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:17:42.427472  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:3f:2e", ip: ""} in network mk-kubernetes-upgrade-678139: {Iface:virbr2 ExpiryTime:2024-07-19 06:16:41 +0000 UTC Type:0 Mac:52:54:00:77:3f:2e Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-678139 Clientid:01:52:54:00:77:3f:2e}
	I0719 05:17:42.427505  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined IP address 192.168.50.182 and MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:17:42.427678  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHPort
	I0719 05:17:42.427867  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHKeyPath
	I0719 05:17:42.428088  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHUsername
	I0719 05:17:42.428275  176994 sshutil.go:53] new ssh client: &{IP:192.168.50.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/kubernetes-upgrade-678139/id_rsa Username:docker}
	I0719 05:17:42.512074  176994 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 05:17:42.516757  176994 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 05:17:42.516787  176994 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-122995/.minikube/addons for local assets ...
	I0719 05:17:42.516859  176994 filesync.go:126] Scanning /home/jenkins/minikube-integration/19302-122995/.minikube/files for local assets ...
	I0719 05:17:42.516976  176994 filesync.go:149] local asset: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem -> 1301702.pem in /etc/ssl/certs
	I0719 05:17:42.517143  176994 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 05:17:42.529237  176994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem --> /etc/ssl/certs/1301702.pem (1708 bytes)
	I0719 05:17:42.554038  176994 start.go:296] duration metric: took 130.576668ms for postStartSetup
	I0719 05:17:42.554088  176994 fix.go:56] duration metric: took 7.208190447s for fixHost
	I0719 05:17:42.554117  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHHostname
	I0719 05:17:42.557365  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:17:42.557798  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:3f:2e", ip: ""} in network mk-kubernetes-upgrade-678139: {Iface:virbr2 ExpiryTime:2024-07-19 06:16:41 +0000 UTC Type:0 Mac:52:54:00:77:3f:2e Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-678139 Clientid:01:52:54:00:77:3f:2e}
	I0719 05:17:42.557832  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined IP address 192.168.50.182 and MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:17:42.558082  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHPort
	I0719 05:17:42.558323  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHKeyPath
	I0719 05:17:42.558544  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHKeyPath
	I0719 05:17:42.558728  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHUsername
	I0719 05:17:42.558946  176994 main.go:141] libmachine: Using SSH client type: native
	I0719 05:17:42.559154  176994 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.182 22 <nil> <nil>}
	I0719 05:17:42.559170  176994 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 05:17:42.669946  176994 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721366262.661074104
	
	I0719 05:17:42.669974  176994 fix.go:216] guest clock: 1721366262.661074104
	I0719 05:17:42.669981  176994 fix.go:229] Guest: 2024-07-19 05:17:42.661074104 +0000 UTC Remote: 2024-07-19 05:17:42.554093636 +0000 UTC m=+28.684701526 (delta=106.980468ms)
	I0719 05:17:42.670000  176994 fix.go:200] guest clock delta is within tolerance: 106.980468ms
	I0719 05:17:42.670005  176994 start.go:83] releasing machines lock for "kubernetes-upgrade-678139", held for 7.324141021s
	I0719 05:17:42.670027  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .DriverName
	I0719 05:17:42.670320  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetIP
	I0719 05:17:42.673099  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:17:42.673510  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:3f:2e", ip: ""} in network mk-kubernetes-upgrade-678139: {Iface:virbr2 ExpiryTime:2024-07-19 06:16:41 +0000 UTC Type:0 Mac:52:54:00:77:3f:2e Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-678139 Clientid:01:52:54:00:77:3f:2e}
	I0719 05:17:42.673541  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined IP address 192.168.50.182 and MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:17:42.673685  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .DriverName
	I0719 05:17:42.674222  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .DriverName
	I0719 05:17:42.674427  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .DriverName
	I0719 05:17:42.674510  176994 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 05:17:42.674568  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHHostname
	I0719 05:17:42.674657  176994 ssh_runner.go:195] Run: cat /version.json
	I0719 05:17:42.674683  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHHostname
	I0719 05:17:42.677587  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:17:42.677820  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:17:42.677896  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:3f:2e", ip: ""} in network mk-kubernetes-upgrade-678139: {Iface:virbr2 ExpiryTime:2024-07-19 06:16:41 +0000 UTC Type:0 Mac:52:54:00:77:3f:2e Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-678139 Clientid:01:52:54:00:77:3f:2e}
	I0719 05:17:42.677922  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined IP address 192.168.50.182 and MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:17:42.678079  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHPort
	I0719 05:17:42.678213  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:3f:2e", ip: ""} in network mk-kubernetes-upgrade-678139: {Iface:virbr2 ExpiryTime:2024-07-19 06:16:41 +0000 UTC Type:0 Mac:52:54:00:77:3f:2e Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-678139 Clientid:01:52:54:00:77:3f:2e}
	I0719 05:17:42.678237  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHKeyPath
	I0719 05:17:42.678241  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined IP address 192.168.50.182 and MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:17:42.678387  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHPort
	I0719 05:17:42.678395  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHUsername
	I0719 05:17:42.678543  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHKeyPath
	I0719 05:17:42.678538  176994 sshutil.go:53] new ssh client: &{IP:192.168.50.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/kubernetes-upgrade-678139/id_rsa Username:docker}
	I0719 05:17:42.678680  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetSSHUsername
	I0719 05:17:42.678834  176994 sshutil.go:53] new ssh client: &{IP:192.168.50.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/kubernetes-upgrade-678139/id_rsa Username:docker}
	I0719 05:17:42.786802  176994 ssh_runner.go:195] Run: systemctl --version
	I0719 05:17:42.794186  176994 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0719 05:17:42.958647  176994 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 05:17:42.970014  176994 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 05:17:42.970089  176994 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 05:17:42.983401  176994 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0719 05:17:42.983426  176994 start.go:495] detecting cgroup driver to use...
	I0719 05:17:42.983493  176994 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 05:17:43.007520  176994 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 05:17:43.024261  176994 docker.go:217] disabling cri-docker service (if available) ...
	I0719 05:17:43.024355  176994 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0719 05:17:43.041881  176994 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0719 05:17:43.056446  176994 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0719 05:17:43.205531  176994 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0719 05:17:43.353941  176994 docker.go:233] disabling docker service ...
	I0719 05:17:43.354007  176994 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0719 05:17:43.372097  176994 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0719 05:17:43.386610  176994 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0719 05:17:43.552984  176994 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0719 05:17:43.716258  176994 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0719 05:17:43.752397  176994 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 05:17:43.800869  176994 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0719 05:17:43.800947  176994 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 05:17:43.834388  176994 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0719 05:17:43.834480  176994 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 05:17:43.861435  176994 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 05:17:43.876589  176994 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 05:17:43.891838  176994 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 05:17:43.903522  176994 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 05:17:43.914514  176994 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
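
The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf over SSH with sed: it pins the pause image to registry.k8s.io/pause:3.10, switches cgroup_manager to cgroupfs, and re-adds conmon_cgroup = "pod". A minimal Go sketch of the same two substitutions, applied locally with regexp instead of sed-over-SSH; the file path and image tag come from the log lines, everything else is illustrative and not minikube's implementation:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const confPath = "/etc/crio/crio.conf.d/02-crio.conf" // path from the log
	data, err := os.ReadFile(confPath)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Pin the pause image, as in the first sed above.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	// Switch the cgroup manager to cgroupfs, as in the second sed.
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(confPath, data, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
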
	I0719 05:17:40.941182  176647 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.539394705s)
	I0719 05:17:40.941217  176647 crio.go:469] duration metric: took 2.539532639s to extract the tarball
	I0719 05:17:40.941228  176647 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 05:17:40.984111  176647 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 05:17:41.026576  176647 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0719 05:17:41.026602  176647 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 05:17:41.026664  176647 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 05:17:41.026682  176647 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 05:17:41.026713  176647 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 05:17:41.026726  176647 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 05:17:41.026750  176647 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0719 05:17:41.026770  176647 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0719 05:17:41.026933  176647 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 05:17:41.026944  176647 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0719 05:17:41.028013  176647 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0719 05:17:41.028231  176647 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0719 05:17:41.028268  176647 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 05:17:41.028446  176647 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 05:17:41.028600  176647 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 05:17:41.028602  176647 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 05:17:41.028612  176647 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0719 05:17:41.028614  176647 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 05:17:41.249933  176647 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0719 05:17:41.250794  176647 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0719 05:17:41.264495  176647 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0719 05:17:41.269675  176647 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0719 05:17:41.270554  176647 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0719 05:17:41.278124  176647 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 05:17:41.287187  176647 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0719 05:17:41.345787  176647 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0719 05:17:41.345852  176647 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0719 05:17:41.345908  176647 ssh_runner.go:195] Run: which crictl
	I0719 05:17:41.366453  176647 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0719 05:17:41.366509  176647 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0719 05:17:41.366549  176647 ssh_runner.go:195] Run: which crictl
	I0719 05:17:41.399949  176647 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0719 05:17:41.400006  176647 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0719 05:17:41.400051  176647 ssh_runner.go:195] Run: which crictl
	I0719 05:17:41.425786  176647 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0719 05:17:41.425840  176647 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0719 05:17:41.425799  176647 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0719 05:17:41.425858  176647 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0719 05:17:41.425878  176647 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0719 05:17:41.425889  176647 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 05:17:41.425891  176647 ssh_runner.go:195] Run: which crictl
	I0719 05:17:41.425924  176647 ssh_runner.go:195] Run: which crictl
	I0719 05:17:41.425933  176647 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0719 05:17:41.425955  176647 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0719 05:17:41.425978  176647 ssh_runner.go:195] Run: which crictl
	I0719 05:17:41.425987  176647 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0719 05:17:41.425924  176647 ssh_runner.go:195] Run: which crictl
	I0719 05:17:41.426037  176647 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0719 05:17:41.426064  176647 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0719 05:17:41.497723  176647 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0719 05:17:41.497803  176647 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0719 05:17:41.497984  176647 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0719 05:17:41.498011  176647 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0719 05:17:41.498074  176647 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0719 05:17:41.498129  176647 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0719 05:17:41.498173  176647 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0719 05:17:41.540688  176647 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0719 05:17:41.585975  176647 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0719 05:17:41.585987  176647 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0719 05:17:41.586041  176647 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0719 05:17:41.922951  176647 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 05:17:42.072105  176647 cache_images.go:92] duration metric: took 1.045486673s to LoadCachedImages
	W0719 05:17:42.072222  176647 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19302-122995/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0719 05:17:42.072239  176647 kubeadm.go:934] updating node { 192.168.61.237 8443 v1.20.0 crio true true} ...
	I0719 05:17:42.072381  176647 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-901291 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.237
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 05:17:42.072467  176647 ssh_runner.go:195] Run: crio config
	I0719 05:17:42.129194  176647 cni.go:84] Creating CNI manager for ""
	I0719 05:17:42.129222  176647 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 05:17:42.129234  176647 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 05:17:42.129265  176647 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.237 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-901291 NodeName:old-k8s-version-901291 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.237"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.237 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0719 05:17:42.129479  176647 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.237
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-901291"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.237
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.237"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
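
The kubeadm/kubelet/kube-proxy configuration printed above is rendered from the cluster parameters (node IP 192.168.61.237, pod subnet 10.244.0.0/16, Kubernetes v1.20.0) and later copied to /var/tmp/minikube/kubeadm.yaml. A minimal sketch of how such a document can be produced with text/template; the template and struct below are invented for illustration and cover only a fragment of the real config:

package main

import (
	"os"
	"text/template"
)

type clusterParams struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values taken from the log above (old-k8s-version-901291).
	_ = t.Execute(os.Stdout, clusterParams{
		AdvertiseAddress:  "192.168.61.237",
		BindPort:          8443,
		NodeName:          "old-k8s-version-901291",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.20.0",
	})
}
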
	
	I0719 05:17:42.129570  176647 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0719 05:17:42.139946  176647 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 05:17:42.140027  176647 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 05:17:42.150733  176647 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0719 05:17:42.169114  176647 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 05:17:42.187085  176647 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0719 05:17:42.203059  176647 ssh_runner.go:195] Run: grep 192.168.61.237	control-plane.minikube.internal$ /etc/hosts
	I0719 05:17:42.206866  176647 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.237	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
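
The bash one-liner above refreshes the control-plane.minikube.internal entry in /etc/hosts: it drops any existing line for that host, appends "192.168.61.237<TAB>control-plane.minikube.internal", and copies the result back into place. A rough Go equivalent, assuming a writable hosts file; illustrative only, not minikube's code:

package main

import (
	"fmt"
	"os"
	"strings"
)

func setControlPlaneEntry(hostsPath, ip string) error {
	const host = "control-plane.minikube.internal"
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	kept := lines[:0]
	for _, line := range lines {
		// Same filter as `grep -v $'\tcontrol-plane.minikube.internal$'`.
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host) // echo "<ip>\t<host>"
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// IP taken from the log above; try this against a copy of /etc/hosts.
	if err := setControlPlaneEntry("/etc/hosts", "192.168.61.237"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
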
	I0719 05:17:42.218814  176647 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:17:42.347986  176647 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 05:17:42.372416  176647 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/old-k8s-version-901291 for IP: 192.168.61.237
	I0719 05:17:42.372449  176647 certs.go:194] generating shared ca certs ...
	I0719 05:17:42.372478  176647 certs.go:226] acquiring lock for ca certs: {Name:mk4073377b5f511f5cfaf63e5b0f12377e731a57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:17:42.372667  176647 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key
	I0719 05:17:42.372727  176647 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key
	I0719 05:17:42.372750  176647 certs.go:256] generating profile certs ...
	I0719 05:17:42.372832  176647 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/old-k8s-version-901291/client.key
	I0719 05:17:42.372854  176647 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/old-k8s-version-901291/client.crt with IP's: []
	I0719 05:17:42.458701  176647 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/old-k8s-version-901291/client.crt ...
	I0719 05:17:42.458745  176647 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/old-k8s-version-901291/client.crt: {Name:mkeb85e05dcedad8f4b26ba53ab8666a559a5e0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:17:42.458951  176647 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/old-k8s-version-901291/client.key ...
	I0719 05:17:42.458969  176647 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/old-k8s-version-901291/client.key: {Name:mkfa020a5c72a362e82a22f54ab1d1bf920ad05f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:17:42.459056  176647 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/old-k8s-version-901291/apiserver.key.a3ac8123
	I0719 05:17:42.459071  176647 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/old-k8s-version-901291/apiserver.crt.a3ac8123 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.237]
	I0719 05:17:42.729904  176647 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/old-k8s-version-901291/apiserver.crt.a3ac8123 ...
	I0719 05:17:42.729931  176647 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/old-k8s-version-901291/apiserver.crt.a3ac8123: {Name:mka663cbeab1c0928c70e60ce7fc1c998b389c45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:17:42.774591  176647 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/old-k8s-version-901291/apiserver.key.a3ac8123 ...
	I0719 05:17:42.774637  176647 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/old-k8s-version-901291/apiserver.key.a3ac8123: {Name:mke71e334b85b293048501ac68a1d06fef024bff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:17:42.774792  176647 certs.go:381] copying /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/old-k8s-version-901291/apiserver.crt.a3ac8123 -> /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/old-k8s-version-901291/apiserver.crt
	I0719 05:17:42.774889  176647 certs.go:385] copying /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/old-k8s-version-901291/apiserver.key.a3ac8123 -> /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/old-k8s-version-901291/apiserver.key
	I0719 05:17:42.774962  176647 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/old-k8s-version-901291/proxy-client.key
	I0719 05:17:42.774984  176647 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/old-k8s-version-901291/proxy-client.crt with IP's: []
	I0719 05:17:43.002098  176647 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/old-k8s-version-901291/proxy-client.crt ...
	I0719 05:17:43.002129  176647 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/old-k8s-version-901291/proxy-client.crt: {Name:mkc3774efac7edebe1ea3ebd23dbd2c4a321c677 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:17:43.002304  176647 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/old-k8s-version-901291/proxy-client.key ...
	I0719 05:17:43.002327  176647 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/old-k8s-version-901291/proxy-client.key: {Name:mk79a235fb3ed8ede26b3feec69f78d7466f750f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:17:43.002515  176647 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem (1338 bytes)
	W0719 05:17:43.002554  176647 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170_empty.pem, impossibly tiny 0 bytes
	I0719 05:17:43.002564  176647 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 05:17:43.002585  176647 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem (1082 bytes)
	I0719 05:17:43.002608  176647 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem (1123 bytes)
	I0719 05:17:43.002627  176647 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem (1679 bytes)
	I0719 05:17:43.002662  176647 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem (1708 bytes)
	I0719 05:17:43.003205  176647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 05:17:43.039464  176647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 05:17:43.071350  176647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 05:17:43.103909  176647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 05:17:43.131021  176647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/old-k8s-version-901291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0719 05:17:43.157315  176647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/old-k8s-version-901291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 05:17:43.187474  176647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/old-k8s-version-901291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 05:17:43.223009  176647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/old-k8s-version-901291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 05:17:43.253633  176647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 05:17:43.287929  176647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem --> /usr/share/ca-certificates/130170.pem (1338 bytes)
	I0719 05:17:43.312316  176647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem --> /usr/share/ca-certificates/1301702.pem (1708 bytes)
	I0719 05:17:43.334692  176647 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 05:17:43.352628  176647 ssh_runner.go:195] Run: openssl version
	I0719 05:17:43.360307  176647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 05:17:43.372088  176647 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:17:43.378063  176647 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:38 /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:17:43.378167  176647 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:17:43.385206  176647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 05:17:43.399043  176647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130170.pem && ln -fs /usr/share/ca-certificates/130170.pem /etc/ssl/certs/130170.pem"
	I0719 05:17:43.412247  176647 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130170.pem
	I0719 05:17:43.417436  176647 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 04:19 /usr/share/ca-certificates/130170.pem
	I0719 05:17:43.417495  176647 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130170.pem
	I0719 05:17:43.423213  176647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/130170.pem /etc/ssl/certs/51391683.0"
	I0719 05:17:43.434745  176647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1301702.pem && ln -fs /usr/share/ca-certificates/1301702.pem /etc/ssl/certs/1301702.pem"
	I0719 05:17:43.445556  176647 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1301702.pem
	I0719 05:17:43.450419  176647 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 04:19 /usr/share/ca-certificates/1301702.pem
	I0719 05:17:43.450481  176647 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1301702.pem
	I0719 05:17:43.455813  176647 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1301702.pem /etc/ssl/certs/3ec20f2e.0"
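
Each CA bundle copied to /usr/share/ca-certificates is then exposed to OpenSSL-based clients by linking it under /etc/ssl/certs as <subject-hash>.0, where the hash comes from openssl x509 -hash -noout. A small sketch of that hash-and-symlink step, shelling out to openssl (assumed to be on PATH); not minikube's actual implementation:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCert(pemPath, certsDir string) error {
	// `openssl x509 -hash -noout -in <pem>` prints the legacy subject hash.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mimic the force flag of `ln -fs`
	return os.Symlink(pemPath, link)
}

func main() {
	// Paths from the log; the 130170.pem and 1301702.pem bundles get the same treatment.
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
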
	I0719 05:17:43.470198  176647 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 05:17:43.475168  176647 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 05:17:43.475229  176647 kubeadm.go:392] StartCluster: {Name:old-k8s-version-901291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-901291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.237 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 05:17:43.475323  176647 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 05:17:43.475374  176647 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 05:17:43.517428  176647 cri.go:89] found id: ""
	I0719 05:17:43.517511  176647 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 05:17:43.530880  176647 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 05:17:43.543554  176647 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 05:17:43.555990  176647 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 05:17:43.556014  176647 kubeadm.go:157] found existing configuration files:
	
	I0719 05:17:43.556071  176647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 05:17:43.566592  176647 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 05:17:43.566657  176647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 05:17:43.578568  176647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 05:17:43.588580  176647 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 05:17:43.588648  176647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 05:17:43.599004  176647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 05:17:43.609504  176647 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 05:17:43.609579  176647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 05:17:43.619159  176647 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 05:17:43.627953  176647 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 05:17:43.628040  176647 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 05:17:43.638429  176647 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 05:17:43.931620  176647 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 05:17:42.776066  177311 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 05:17:42.776317  177311 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 05:17:42.776350  177311 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 05:17:42.791660  177311 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37517
	I0719 05:17:42.792107  177311 main.go:141] libmachine: () Calling .GetVersion
	I0719 05:17:42.792665  177311 main.go:141] libmachine: Using API Version  1
	I0719 05:17:42.792689  177311 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 05:17:42.793055  177311 main.go:141] libmachine: () Calling .GetMachineName
	I0719 05:17:42.793306  177311 main.go:141] libmachine: (no-preload-783098) Calling .GetMachineName
	I0719 05:17:42.793461  177311 main.go:141] libmachine: (no-preload-783098) Calling .DriverName
	I0719 05:17:42.793632  177311 start.go:159] libmachine.API.Create for "no-preload-783098" (driver="kvm2")
	I0719 05:17:42.793659  177311 client.go:168] LocalClient.Create starting
	I0719 05:17:42.793695  177311 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem
	I0719 05:17:42.793737  177311 main.go:141] libmachine: Decoding PEM data...
	I0719 05:17:42.793755  177311 main.go:141] libmachine: Parsing certificate...
	I0719 05:17:42.793830  177311 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem
	I0719 05:17:42.793863  177311 main.go:141] libmachine: Decoding PEM data...
	I0719 05:17:42.793881  177311 main.go:141] libmachine: Parsing certificate...
	I0719 05:17:42.793905  177311 main.go:141] libmachine: Running pre-create checks...
	I0719 05:17:42.793916  177311 main.go:141] libmachine: (no-preload-783098) Calling .PreCreateCheck
	I0719 05:17:42.794310  177311 main.go:141] libmachine: (no-preload-783098) Calling .GetConfigRaw
	I0719 05:17:42.794783  177311 main.go:141] libmachine: Creating machine...
	I0719 05:17:42.794801  177311 main.go:141] libmachine: (no-preload-783098) Calling .Create
	I0719 05:17:42.794953  177311 main.go:141] libmachine: (no-preload-783098) Creating KVM machine...
	I0719 05:17:42.796268  177311 main.go:141] libmachine: (no-preload-783098) DBG | found existing default KVM network
	I0719 05:17:42.798000  177311 main.go:141] libmachine: (no-preload-783098) DBG | I0719 05:17:42.797831  177410 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:7e:65:da} reservation:<nil>}
	I0719 05:17:42.798915  177311 main.go:141] libmachine: (no-preload-783098) DBG | I0719 05:17:42.798818  177410 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:43:54:1c} reservation:<nil>}
	I0719 05:17:42.799893  177311 main.go:141] libmachine: (no-preload-783098) DBG | I0719 05:17:42.799814  177410 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:f7:37:4b} reservation:<nil>}
	I0719 05:17:42.801022  177311 main.go:141] libmachine: (no-preload-783098) DBG | I0719 05:17:42.800940  177410 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a3d50}
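
The DBG lines above show the driver probing 192.168.39.0/24, 192.168.50.0/24 and 192.168.61.0/24, finding them taken, and settling on 192.168.72.0/24. A hypothetical sketch of that free-subnet search; the candidate step of 11 is inferred from this run only, and the helper names are invented:

package main

import (
	"fmt"
	"net"
)

func firstFreeSubnet(taken []string) (string, error) {
	used := make([]*net.IPNet, 0, len(taken))
	for _, t := range taken {
		_, n, err := net.ParseCIDR(t)
		if err != nil {
			return "", err
		}
		used = append(used, n)
	}
	// Step of 11 (39, 50, 61, 72, ...) mirrors the candidates seen in this run;
	// treat the exact step as an assumption, not documented behaviour.
	for third := 39; third <= 254; third += 11 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		_, cand, _ := net.ParseCIDR(cidr)
		free := true
		for _, u := range used {
			if u.Contains(cand.IP) || cand.Contains(u.IP) {
				free = false
				break
			}
		}
		if free {
			return cidr, nil
		}
	}
	return "", fmt.Errorf("no free 192.168.x.0/24 subnet found")
}

func main() {
	// Taken subnets as reported in the log above.
	free, err := firstFreeSubnet([]string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"})
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("using free private subnet", free) // expected: 192.168.72.0/24
}
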
	I0719 05:17:42.801092  177311 main.go:141] libmachine: (no-preload-783098) DBG | created network xml: 
	I0719 05:17:42.801119  177311 main.go:141] libmachine: (no-preload-783098) DBG | <network>
	I0719 05:17:42.801129  177311 main.go:141] libmachine: (no-preload-783098) DBG |   <name>mk-no-preload-783098</name>
	I0719 05:17:42.801136  177311 main.go:141] libmachine: (no-preload-783098) DBG |   <dns enable='no'/>
	I0719 05:17:42.801148  177311 main.go:141] libmachine: (no-preload-783098) DBG |   
	I0719 05:17:42.801169  177311 main.go:141] libmachine: (no-preload-783098) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0719 05:17:42.801182  177311 main.go:141] libmachine: (no-preload-783098) DBG |     <dhcp>
	I0719 05:17:42.801192  177311 main.go:141] libmachine: (no-preload-783098) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0719 05:17:42.801204  177311 main.go:141] libmachine: (no-preload-783098) DBG |     </dhcp>
	I0719 05:17:42.801214  177311 main.go:141] libmachine: (no-preload-783098) DBG |   </ip>
	I0719 05:17:42.801221  177311 main.go:141] libmachine: (no-preload-783098) DBG |   
	I0719 05:17:42.801229  177311 main.go:141] libmachine: (no-preload-783098) DBG | </network>
	I0719 05:17:42.801235  177311 main.go:141] libmachine: (no-preload-783098) DBG | 
	I0719 05:17:42.937925  177311 main.go:141] libmachine: (no-preload-783098) DBG | trying to create private KVM network mk-no-preload-783098 192.168.72.0/24...
	I0719 05:17:43.019025  177311 main.go:141] libmachine: (no-preload-783098) DBG | private KVM network mk-no-preload-783098 192.168.72.0/24 created
	I0719 05:17:43.019073  177311 main.go:141] libmachine: (no-preload-783098) DBG | I0719 05:17:43.018987  177410 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19302-122995/.minikube
	I0719 05:17:43.019092  177311 main.go:141] libmachine: (no-preload-783098) Setting up store path in /home/jenkins/minikube-integration/19302-122995/.minikube/machines/no-preload-783098 ...
	I0719 05:17:43.019103  177311 main.go:141] libmachine: (no-preload-783098) Building disk image from file:///home/jenkins/minikube-integration/19302-122995/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 05:17:43.019188  177311 main.go:141] libmachine: (no-preload-783098) Downloading /home/jenkins/minikube-integration/19302-122995/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19302-122995/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 05:17:43.289112  177311 main.go:141] libmachine: (no-preload-783098) DBG | I0719 05:17:43.288973  177410 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/no-preload-783098/id_rsa...
	I0719 05:17:43.942086  177311 main.go:141] libmachine: (no-preload-783098) DBG | I0719 05:17:43.941942  177410 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/no-preload-783098/no-preload-783098.rawdisk...
	I0719 05:17:43.942118  177311 main.go:141] libmachine: (no-preload-783098) DBG | Writing magic tar header
	I0719 05:17:43.942138  177311 main.go:141] libmachine: (no-preload-783098) DBG | Writing SSH key tar header
	I0719 05:17:43.942157  177311 main.go:141] libmachine: (no-preload-783098) DBG | I0719 05:17:43.942128  177410 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19302-122995/.minikube/machines/no-preload-783098 ...
	I0719 05:17:43.942274  177311 main.go:141] libmachine: (no-preload-783098) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995/.minikube/machines/no-preload-783098
	I0719 05:17:43.942302  177311 main.go:141] libmachine: (no-preload-783098) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995/.minikube/machines
	I0719 05:17:43.942311  177311 main.go:141] libmachine: (no-preload-783098) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995/.minikube
	I0719 05:17:43.942324  177311 main.go:141] libmachine: (no-preload-783098) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995/.minikube/machines/no-preload-783098 (perms=drwx------)
	I0719 05:17:43.942345  177311 main.go:141] libmachine: (no-preload-783098) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19302-122995
	I0719 05:17:43.942369  177311 main.go:141] libmachine: (no-preload-783098) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0719 05:17:43.942379  177311 main.go:141] libmachine: (no-preload-783098) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995/.minikube/machines (perms=drwxr-xr-x)
	I0719 05:17:43.942393  177311 main.go:141] libmachine: (no-preload-783098) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995/.minikube (perms=drwxr-xr-x)
	I0719 05:17:43.942403  177311 main.go:141] libmachine: (no-preload-783098) Setting executable bit set on /home/jenkins/minikube-integration/19302-122995 (perms=drwxrwxr-x)
	I0719 05:17:43.942413  177311 main.go:141] libmachine: (no-preload-783098) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0719 05:17:43.942421  177311 main.go:141] libmachine: (no-preload-783098) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0719 05:17:43.942431  177311 main.go:141] libmachine: (no-preload-783098) Creating domain...
	I0719 05:17:43.942463  177311 main.go:141] libmachine: (no-preload-783098) DBG | Checking permissions on dir: /home/jenkins
	I0719 05:17:43.942489  177311 main.go:141] libmachine: (no-preload-783098) DBG | Checking permissions on dir: /home
	I0719 05:17:43.942501  177311 main.go:141] libmachine: (no-preload-783098) DBG | Skipping /home - not owner
	I0719 05:17:43.943986  177311 main.go:141] libmachine: (no-preload-783098) define libvirt domain using xml: 
	I0719 05:17:43.944012  177311 main.go:141] libmachine: (no-preload-783098) <domain type='kvm'>
	I0719 05:17:43.944024  177311 main.go:141] libmachine: (no-preload-783098)   <name>no-preload-783098</name>
	I0719 05:17:43.944031  177311 main.go:141] libmachine: (no-preload-783098)   <memory unit='MiB'>2200</memory>
	I0719 05:17:43.944039  177311 main.go:141] libmachine: (no-preload-783098)   <vcpu>2</vcpu>
	I0719 05:17:43.944045  177311 main.go:141] libmachine: (no-preload-783098)   <features>
	I0719 05:17:43.944054  177311 main.go:141] libmachine: (no-preload-783098)     <acpi/>
	I0719 05:17:43.944068  177311 main.go:141] libmachine: (no-preload-783098)     <apic/>
	I0719 05:17:43.944076  177311 main.go:141] libmachine: (no-preload-783098)     <pae/>
	I0719 05:17:43.944082  177311 main.go:141] libmachine: (no-preload-783098)     
	I0719 05:17:43.944089  177311 main.go:141] libmachine: (no-preload-783098)   </features>
	I0719 05:17:43.944096  177311 main.go:141] libmachine: (no-preload-783098)   <cpu mode='host-passthrough'>
	I0719 05:17:43.944104  177311 main.go:141] libmachine: (no-preload-783098)   
	I0719 05:17:43.944111  177311 main.go:141] libmachine: (no-preload-783098)   </cpu>
	I0719 05:17:43.944119  177311 main.go:141] libmachine: (no-preload-783098)   <os>
	I0719 05:17:43.944126  177311 main.go:141] libmachine: (no-preload-783098)     <type>hvm</type>
	I0719 05:17:43.944134  177311 main.go:141] libmachine: (no-preload-783098)     <boot dev='cdrom'/>
	I0719 05:17:43.944141  177311 main.go:141] libmachine: (no-preload-783098)     <boot dev='hd'/>
	I0719 05:17:43.944149  177311 main.go:141] libmachine: (no-preload-783098)     <bootmenu enable='no'/>
	I0719 05:17:43.944156  177311 main.go:141] libmachine: (no-preload-783098)   </os>
	I0719 05:17:43.944164  177311 main.go:141] libmachine: (no-preload-783098)   <devices>
	I0719 05:17:43.944177  177311 main.go:141] libmachine: (no-preload-783098)     <disk type='file' device='cdrom'>
	I0719 05:17:43.944190  177311 main.go:141] libmachine: (no-preload-783098)       <source file='/home/jenkins/minikube-integration/19302-122995/.minikube/machines/no-preload-783098/boot2docker.iso'/>
	I0719 05:17:43.944212  177311 main.go:141] libmachine: (no-preload-783098)       <target dev='hdc' bus='scsi'/>
	I0719 05:17:43.944221  177311 main.go:141] libmachine: (no-preload-783098)       <readonly/>
	I0719 05:17:43.944228  177311 main.go:141] libmachine: (no-preload-783098)     </disk>
	I0719 05:17:43.944237  177311 main.go:141] libmachine: (no-preload-783098)     <disk type='file' device='disk'>
	I0719 05:17:43.944246  177311 main.go:141] libmachine: (no-preload-783098)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0719 05:17:43.944259  177311 main.go:141] libmachine: (no-preload-783098)       <source file='/home/jenkins/minikube-integration/19302-122995/.minikube/machines/no-preload-783098/no-preload-783098.rawdisk'/>
	I0719 05:17:43.944267  177311 main.go:141] libmachine: (no-preload-783098)       <target dev='hda' bus='virtio'/>
	I0719 05:17:43.944275  177311 main.go:141] libmachine: (no-preload-783098)     </disk>
	I0719 05:17:43.944282  177311 main.go:141] libmachine: (no-preload-783098)     <interface type='network'>
	I0719 05:17:43.944292  177311 main.go:141] libmachine: (no-preload-783098)       <source network='mk-no-preload-783098'/>
	I0719 05:17:43.944298  177311 main.go:141] libmachine: (no-preload-783098)       <model type='virtio'/>
	I0719 05:17:43.944307  177311 main.go:141] libmachine: (no-preload-783098)     </interface>
	I0719 05:17:43.944313  177311 main.go:141] libmachine: (no-preload-783098)     <interface type='network'>
	I0719 05:17:43.944326  177311 main.go:141] libmachine: (no-preload-783098)       <source network='default'/>
	I0719 05:17:43.944330  177311 main.go:141] libmachine: (no-preload-783098)       <model type='virtio'/>
	I0719 05:17:43.944336  177311 main.go:141] libmachine: (no-preload-783098)     </interface>
	I0719 05:17:43.944343  177311 main.go:141] libmachine: (no-preload-783098)     <serial type='pty'>
	I0719 05:17:43.944354  177311 main.go:141] libmachine: (no-preload-783098)       <target port='0'/>
	I0719 05:17:43.944360  177311 main.go:141] libmachine: (no-preload-783098)     </serial>
	I0719 05:17:43.944368  177311 main.go:141] libmachine: (no-preload-783098)     <console type='pty'>
	I0719 05:17:43.944376  177311 main.go:141] libmachine: (no-preload-783098)       <target type='serial' port='0'/>
	I0719 05:17:43.944383  177311 main.go:141] libmachine: (no-preload-783098)     </console>
	I0719 05:17:43.944390  177311 main.go:141] libmachine: (no-preload-783098)     <rng model='virtio'>
	I0719 05:17:43.944400  177311 main.go:141] libmachine: (no-preload-783098)       <backend model='random'>/dev/random</backend>
	I0719 05:17:43.944407  177311 main.go:141] libmachine: (no-preload-783098)     </rng>
	I0719 05:17:43.944414  177311 main.go:141] libmachine: (no-preload-783098)     
	I0719 05:17:43.944421  177311 main.go:141] libmachine: (no-preload-783098)     
	I0719 05:17:43.944428  177311 main.go:141] libmachine: (no-preload-783098)   </devices>
	I0719 05:17:43.944434  177311 main.go:141] libmachine: (no-preload-783098) </domain>
	I0719 05:17:43.944443  177311 main.go:141] libmachine: (no-preload-783098) 
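The domain XML logged above is what the kvm2 driver hands to libvirt before booting the VM. Below is a minimal Go sketch of defining and starting such a domain through the libvirt Go bindings (libvirt.org/go/libvirt); the XML file name is a placeholder and the code is illustrative, not the driver's actual implementation.

// Illustrative sketch: define and boot a domain from XML like the one above.
package main

import (
	"fmt"
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Hypothetical file holding the domain XML printed in the log.
	xml, err := os.ReadFile("no-preload-783098.xml")
	if err != nil {
		log.Fatal(err)
	}

	// Same connection URI as KVMQemuURI in the cluster config (qemu:///system).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Persistently define the domain, then start it (virsh define + virsh start).
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
	name, _ := dom.GetName()
	fmt.Println("started domain", name)
}

From the command line, virsh define and virsh start perform the equivalent two steps.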
	I0719 05:17:44.047857  177311 main.go:141] libmachine: (no-preload-783098) DBG | domain no-preload-783098 has defined MAC address 52:54:00:78:17:a4 in network default
	I0719 05:17:44.048566  177311 main.go:141] libmachine: (no-preload-783098) Ensuring networks are active...
	I0719 05:17:44.048594  177311 main.go:141] libmachine: (no-preload-783098) DBG | domain no-preload-783098 has defined MAC address 52:54:00:f0:ba:67 in network mk-no-preload-783098
	I0719 05:17:44.049475  177311 main.go:141] libmachine: (no-preload-783098) Ensuring network default is active
	I0719 05:17:44.049890  177311 main.go:141] libmachine: (no-preload-783098) Ensuring network mk-no-preload-783098 is active
	I0719 05:17:44.050477  177311 main.go:141] libmachine: (no-preload-783098) Getting domain xml...
	I0719 05:17:44.051334  177311 main.go:141] libmachine: (no-preload-783098) Creating domain...
	I0719 05:17:45.773484  177311 main.go:141] libmachine: (no-preload-783098) Waiting to get IP...
	I0719 05:17:45.774263  177311 main.go:141] libmachine: (no-preload-783098) DBG | domain no-preload-783098 has defined MAC address 52:54:00:f0:ba:67 in network mk-no-preload-783098
	I0719 05:17:45.774740  177311 main.go:141] libmachine: (no-preload-783098) DBG | unable to find current IP address of domain no-preload-783098 in network mk-no-preload-783098
	I0719 05:17:45.774783  177311 main.go:141] libmachine: (no-preload-783098) DBG | I0719 05:17:45.774722  177410 retry.go:31] will retry after 232.715959ms: waiting for machine to come up
	I0719 05:17:46.009186  177311 main.go:141] libmachine: (no-preload-783098) DBG | domain no-preload-783098 has defined MAC address 52:54:00:f0:ba:67 in network mk-no-preload-783098
	I0719 05:17:46.009762  177311 main.go:141] libmachine: (no-preload-783098) DBG | unable to find current IP address of domain no-preload-783098 in network mk-no-preload-783098
	I0719 05:17:46.009793  177311 main.go:141] libmachine: (no-preload-783098) DBG | I0719 05:17:46.009710  177410 retry.go:31] will retry after 310.768623ms: waiting for machine to come up
	I0719 05:17:46.322431  177311 main.go:141] libmachine: (no-preload-783098) DBG | domain no-preload-783098 has defined MAC address 52:54:00:f0:ba:67 in network mk-no-preload-783098
	I0719 05:17:46.322987  177311 main.go:141] libmachine: (no-preload-783098) DBG | unable to find current IP address of domain no-preload-783098 in network mk-no-preload-783098
	I0719 05:17:46.323020  177311 main.go:141] libmachine: (no-preload-783098) DBG | I0719 05:17:46.322932  177410 retry.go:31] will retry after 432.412495ms: waiting for machine to come up
	I0719 05:17:43.926418  176994 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0719 05:17:43.937907  176994 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 05:17:43.950834  176994 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 05:17:43.963578  176994 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:17:44.122578  176994 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0719 05:17:50.446137  176994 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.323505253s)
	I0719 05:17:50.446177  176994 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0719 05:17:50.446248  176994 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0719 05:17:50.451617  176994 start.go:563] Will wait 60s for crictl version
	I0719 05:17:50.451689  176994 ssh_runner.go:195] Run: which crictl
	I0719 05:17:50.455358  176994 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 05:17:50.500886  176994 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
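Before the runtime is used, the start sequence above waits up to 60s for /var/run/crio/crio.sock to appear and then queries the CRI version. A small sketch of polling for the socket path with a deadline follows; the fixed 500ms poll interval is an assumption, not the interval used above.

// Sketch: wait for a runtime socket to appear, with an overall timeout.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // socket exists, runtime is up
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}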
	I0719 05:17:50.500976  176994 ssh_runner.go:195] Run: crio --version
	I0719 05:17:50.533355  176994 ssh_runner.go:195] Run: crio --version
	I0719 05:17:50.561392  176994 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0719 05:17:46.756458  177311 main.go:141] libmachine: (no-preload-783098) DBG | domain no-preload-783098 has defined MAC address 52:54:00:f0:ba:67 in network mk-no-preload-783098
	I0719 05:17:46.756937  177311 main.go:141] libmachine: (no-preload-783098) DBG | unable to find current IP address of domain no-preload-783098 in network mk-no-preload-783098
	I0719 05:17:46.756961  177311 main.go:141] libmachine: (no-preload-783098) DBG | I0719 05:17:46.756887  177410 retry.go:31] will retry after 473.094222ms: waiting for machine to come up
	I0719 05:17:47.231767  177311 main.go:141] libmachine: (no-preload-783098) DBG | domain no-preload-783098 has defined MAC address 52:54:00:f0:ba:67 in network mk-no-preload-783098
	I0719 05:17:47.232362  177311 main.go:141] libmachine: (no-preload-783098) DBG | unable to find current IP address of domain no-preload-783098 in network mk-no-preload-783098
	I0719 05:17:47.232399  177311 main.go:141] libmachine: (no-preload-783098) DBG | I0719 05:17:47.232284  177410 retry.go:31] will retry after 481.485846ms: waiting for machine to come up
	I0719 05:17:47.714895  177311 main.go:141] libmachine: (no-preload-783098) DBG | domain no-preload-783098 has defined MAC address 52:54:00:f0:ba:67 in network mk-no-preload-783098
	I0719 05:17:47.715445  177311 main.go:141] libmachine: (no-preload-783098) DBG | unable to find current IP address of domain no-preload-783098 in network mk-no-preload-783098
	I0719 05:17:47.715479  177311 main.go:141] libmachine: (no-preload-783098) DBG | I0719 05:17:47.715403  177410 retry.go:31] will retry after 941.162704ms: waiting for machine to come up
	I0719 05:17:48.658304  177311 main.go:141] libmachine: (no-preload-783098) DBG | domain no-preload-783098 has defined MAC address 52:54:00:f0:ba:67 in network mk-no-preload-783098
	I0719 05:17:48.658852  177311 main.go:141] libmachine: (no-preload-783098) DBG | unable to find current IP address of domain no-preload-783098 in network mk-no-preload-783098
	I0719 05:17:48.658882  177311 main.go:141] libmachine: (no-preload-783098) DBG | I0719 05:17:48.658798  177410 retry.go:31] will retry after 756.714782ms: waiting for machine to come up
	I0719 05:17:49.416682  177311 main.go:141] libmachine: (no-preload-783098) DBG | domain no-preload-783098 has defined MAC address 52:54:00:f0:ba:67 in network mk-no-preload-783098
	I0719 05:17:49.417240  177311 main.go:141] libmachine: (no-preload-783098) DBG | unable to find current IP address of domain no-preload-783098 in network mk-no-preload-783098
	I0719 05:17:49.417270  177311 main.go:141] libmachine: (no-preload-783098) DBG | I0719 05:17:49.417185  177410 retry.go:31] will retry after 1.444259218s: waiting for machine to come up
	I0719 05:17:50.862583  177311 main.go:141] libmachine: (no-preload-783098) DBG | domain no-preload-783098 has defined MAC address 52:54:00:f0:ba:67 in network mk-no-preload-783098
	I0719 05:17:50.863123  177311 main.go:141] libmachine: (no-preload-783098) DBG | unable to find current IP address of domain no-preload-783098 in network mk-no-preload-783098
	I0719 05:17:50.863154  177311 main.go:141] libmachine: (no-preload-783098) DBG | I0719 05:17:50.863074  177410 retry.go:31] will retry after 1.555525768s: waiting for machine to come up
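The "will retry after ...: waiting for machine to come up" lines show a retry loop whose delay grows between attempts while the driver polls for the domain's DHCP lease. A minimal sketch of that pattern is below; lookupIP, the returned address, and the growth factor are stand-ins rather than the exact schedule logged above.

// Sketch of retry-with-growing-backoff while waiting for a VM to obtain an IP.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errNoIP = errors.New("no IP yet")

// lookupIP is a stand-in for querying the DHCP lease of the domain's MAC address.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errNoIP
	}
	return "192.168.61.123", nil // placeholder address
}

func main() {
	backoff := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", backoff)
		time.Sleep(backoff)
		backoff += backoff / 2 // grow the delay, roughly matching the increasing intervals above
	}
}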
	I0719 05:17:50.562602  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) Calling .GetIP
	I0719 05:17:50.565597  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:17:50.565978  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:77:3f:2e", ip: ""} in network mk-kubernetes-upgrade-678139: {Iface:virbr2 ExpiryTime:2024-07-19 06:16:41 +0000 UTC Type:0 Mac:52:54:00:77:3f:2e Iaid: IPaddr:192.168.50.182 Prefix:24 Hostname:kubernetes-upgrade-678139 Clientid:01:52:54:00:77:3f:2e}
	I0719 05:17:50.566010  176994 main.go:141] libmachine: (kubernetes-upgrade-678139) DBG | domain kubernetes-upgrade-678139 has defined IP address 192.168.50.182 and MAC address 52:54:00:77:3f:2e in network mk-kubernetes-upgrade-678139
	I0719 05:17:50.566206  176994 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0719 05:17:50.570302  176994 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-678139 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-678139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.182 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 05:17:50.570423  176994 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0719 05:17:50.570483  176994 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 05:17:50.611849  176994 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 05:17:50.611874  176994 crio.go:433] Images already preloaded, skipping extraction
	I0719 05:17:50.611927  176994 ssh_runner.go:195] Run: sudo crictl images --output json
	I0719 05:17:50.644362  176994 crio.go:514] all images are preloaded for cri-o runtime.
	I0719 05:17:50.644386  176994 cache_images.go:84] Images are preloaded, skipping loading
	I0719 05:17:50.644397  176994 kubeadm.go:934] updating node { 192.168.50.182 8443 v1.31.0-beta.0 crio true true} ...
	I0719 05:17:50.644548  176994 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-678139 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-678139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
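The kubelet drop-in above is plain systemd unit text with the Kubernetes version, node name and node IP substituted in. A sketch of rendering such a drop-in with text/template follows; the template fields and the way they are filled are illustrative, not minikube's actual types.

// Sketch: render a kubelet systemd drop-in like the one logged above.
package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(dropIn))
	// Values taken from the log above; in practice they come from the cluster config.
	_ = tmpl.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.31.0-beta.0",
		"NodeName":          "kubernetes-upgrade-678139",
		"NodeIP":            "192.168.50.182",
	})
}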
	I0719 05:17:50.644667  176994 ssh_runner.go:195] Run: crio config
	I0719 05:17:50.691646  176994 cni.go:84] Creating CNI manager for ""
	I0719 05:17:50.691667  176994 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 05:17:50.691676  176994 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 05:17:50.691697  176994 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.182 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-678139 NodeName:kubernetes-upgrade-678139 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/ce
rts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 05:17:50.691818  176994 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-678139"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.182
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.182"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 05:17:50.691875  176994 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0719 05:17:50.702770  176994 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 05:17:50.702839  176994 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 05:17:50.712344  176994 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I0719 05:17:50.727779  176994 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0719 05:17:50.743861  176994 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
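The kubeadm.yaml just copied to the node bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration documents. As a quick sanity check, the KubeletConfiguration portion can be unmarshalled and inspected; the sketch below uses gopkg.in/yaml.v3 with a reduced struct covering only a few of the fields shown above.

// Sketch: spot-check a KubeletConfiguration fragment from the generated config.
package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v3"
)

type kubeletConfig struct {
	Kind                     string `yaml:"kind"`
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	StaticPodPath            string `yaml:"staticPodPath"`
	FailSwapOn               bool   `yaml:"failSwapOn"`
}

func main() {
	doc := []byte(`
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
staticPodPath: /etc/kubernetes/manifests
failSwapOn: false
`)
	var cfg kubeletConfig
	if err := yaml.Unmarshal(doc, &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", cfg)
}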
	I0719 05:17:50.759068  176994 ssh_runner.go:195] Run: grep 192.168.50.182	control-plane.minikube.internal$ /etc/hosts
	I0719 05:17:50.762620  176994 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:17:50.886999  176994 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 05:17:50.902408  176994 certs.go:68] Setting up /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139 for IP: 192.168.50.182
	I0719 05:17:50.902435  176994 certs.go:194] generating shared ca certs ...
	I0719 05:17:50.902456  176994 certs.go:226] acquiring lock for ca certs: {Name:mk4073377b5f511f5cfaf63e5b0f12377e731a57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:17:50.902658  176994 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key
	I0719 05:17:50.902766  176994 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key
	I0719 05:17:50.902786  176994 certs.go:256] generating profile certs ...
	I0719 05:17:50.902897  176994 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/client.key
	I0719 05:17:50.902968  176994 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/apiserver.key.7e935d4b
	I0719 05:17:50.903018  176994 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/proxy-client.key
	I0719 05:17:50.903148  176994 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem (1338 bytes)
	W0719 05:17:50.903180  176994 certs.go:480] ignoring /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170_empty.pem, impossibly tiny 0 bytes
	I0719 05:17:50.903187  176994 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca-key.pem (1679 bytes)
	I0719 05:17:50.903207  176994 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/ca.pem (1082 bytes)
	I0719 05:17:50.903225  176994 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/cert.pem (1123 bytes)
	I0719 05:17:50.903245  176994 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/certs/key.pem (1679 bytes)
	I0719 05:17:50.903276  176994 certs.go:484] found cert: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem (1708 bytes)
	I0719 05:17:50.903856  176994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 05:17:50.926526  176994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 05:17:50.949194  176994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 05:17:50.970823  176994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 05:17:50.993642  176994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0719 05:17:51.015747  176994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 05:17:51.037819  176994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 05:17:51.059742  176994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/kubernetes-upgrade-678139/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 05:17:51.082234  176994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 05:17:51.105107  176994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/certs/130170.pem --> /usr/share/ca-certificates/130170.pem (1338 bytes)
	I0719 05:17:51.126879  176994 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/ssl/certs/1301702.pem --> /usr/share/ca-certificates/1301702.pem (1708 bytes)
	I0719 05:17:51.149748  176994 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 05:17:51.166148  176994 ssh_runner.go:195] Run: openssl version
	I0719 05:17:51.172064  176994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/130170.pem && ln -fs /usr/share/ca-certificates/130170.pem /etc/ssl/certs/130170.pem"
	I0719 05:17:51.182633  176994 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/130170.pem
	I0719 05:17:51.186652  176994 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 04:19 /usr/share/ca-certificates/130170.pem
	I0719 05:17:51.186714  176994 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/130170.pem
	I0719 05:17:51.191910  176994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/130170.pem /etc/ssl/certs/51391683.0"
	I0719 05:17:51.201742  176994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1301702.pem && ln -fs /usr/share/ca-certificates/1301702.pem /etc/ssl/certs/1301702.pem"
	I0719 05:17:51.214024  176994 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1301702.pem
	I0719 05:17:51.218414  176994 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 04:19 /usr/share/ca-certificates/1301702.pem
	I0719 05:17:51.218480  176994 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1301702.pem
	I0719 05:17:51.224120  176994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1301702.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 05:17:51.234328  176994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 05:17:51.245295  176994 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:17:51.249585  176994 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:38 /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:17:51.249667  176994 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:17:51.257025  176994 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
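The openssl x509 -hash / ln -fs pairs above publish each CA into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem). A sketch of that hash-and-link step, shelling out to openssl from Go, is shown below; paths and error handling are simplified.

// Sketch: compute a CA's OpenSSL subject hash and link <hash>.0 to the PEM file.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"

	// Equivalent of: openssl x509 -hash -noout -in <pem>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if err := os.Symlink(pem, link); err != nil && !os.IsExist(err) {
		log.Fatal(err)
	}
	fmt.Println("linked", link, "->", pem)
}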
	I0719 05:17:51.266167  176994 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 05:17:51.270565  176994 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 05:17:51.276955  176994 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 05:17:51.285811  176994 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 05:17:51.291932  176994 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 05:17:51.299676  176994 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 05:17:51.308677  176994 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
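Each -checkend 86400 run above asks OpenSSL whether a certificate expires within the next 24 hours. The same check can be done in pure Go with crypto/x509; the certificate path below is one of those tested above, and the 24h window mirrors the 86400-second argument.

// Sketch: report whether a certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}

Exiting non-zero when the certificate is about to expire mirrors openssl's own behaviour for -checkend.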
	I0719 05:17:51.316896  176994 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-678139 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-678139 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.182 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 05:17:51.317018  176994 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0719 05:17:51.317106  176994 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0719 05:17:51.368223  176994 cri.go:89] found id: "f8d9695c40c1a07bcf34870fcdfac1dcaf980e44e219e9071561c817d0eab3c5"
	I0719 05:17:51.368258  176994 cri.go:89] found id: "c7e09c1fc5932e92fd249a5a998a3d0997abebedcb47ca8f3036ce5c4b6ba980"
	I0719 05:17:51.368271  176994 cri.go:89] found id: "f9a45bb5095f84ae8fd6cd4a4a5a6c47895dfef7a6f6aa0fbf720024cfcdd2fd"
	I0719 05:17:51.368276  176994 cri.go:89] found id: "82cc295e180519c4b8a3efea9f80cdc2faa0e77964aded2d311eab8f129e280c"
	I0719 05:17:51.368280  176994 cri.go:89] found id: "aa4768acf72ce36ad24f1357d853689140b3663f75b3ccef04cf3b1fee65e320"
	I0719 05:17:51.368284  176994 cri.go:89] found id: "0aab590d1cde5936edd1661a7d38caf0025eba3b3162f6b82e566440bf6e7e58"
	I0719 05:17:51.368288  176994 cri.go:89] found id: "67b8f05eb247de04b2ab7e06179d48f36a01742c5868717080538be4478d5c16"
	I0719 05:17:51.368292  176994 cri.go:89] found id: "774e1f915cd8174aa29461b789524f64f617e3d9cc102e144967424ef96afa01"
	I0719 05:17:51.368296  176994 cri.go:89] found id: ""
	I0719 05:17:51.368347  176994 ssh_runner.go:195] Run: sudo runc list -f json
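The crictl invocation above lists all kube-system containers by ID using a pod-namespace label filter. A small sketch of running that command and collecting the IDs, assuming crictl is on PATH and sudo is available non-interactively:

// Sketch: list kube-system container IDs via crictl with a label filter.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if line = strings.TrimSpace(line); line != "" {
			ids = append(ids, line)
		}
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}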
	
	
	==> CRI-O <==
	Jul 19 05:18:01 kubernetes-upgrade-678139 crio[2371]: time="2024-07-19 05:18:01.400143081Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a0b27a1e5bceb5e0e43716c08b90e0367bd5f6f8303fe8e62e3dd0c3d8af9e21,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:b1fa432e-ba36-482f-8ba3-645e19a122d7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721366278698887277,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1fa432e-ba36-482f-8ba3-645e19a122d7,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\
":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-19T05:17:58.365799038Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1286049b10a47ea246a23e25f64895334aee913eddc9b9fd72b314023bfe9ff3,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-b58zr,Uid:ef69b4f1-4269-4939-8f68-3d52b1734a63,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721366278694195632,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-b58zr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef69b4f1-4269-4939-8f68-3d52b1734a63,k8s-
app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T05:17:58.365803508Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:05986a737248e75ab104f952fd3f73f92a54fd8efa3e1a5e0a91e7d1aa2af104,Metadata:&PodSandboxMetadata{Name:coredns-5cfdc65f69-9dm9s,Uid:b11a9792-a256-4533-aa0c-a17b135e3911,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721366278681681082,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5cfdc65f69-9dm9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11a9792-a256-4533-aa0c-a17b135e3911,k8s-app: kube-dns,pod-template-hash: 5cfdc65f69,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T05:17:58.365800576Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:338a48f6f0b953b047e98c897cd1fcd4cc683d55af0c686ddf7b5c7572b3f33f,Metadata:&PodSandboxMetadata{Name:kube-proxy-4tvdc,Uid:46eed40f-6539-4077-ad05-47338886b953,N
amespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721366278679594073,Labels:map[string]string{controller-revision-hash: 6558c48888,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4tvdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46eed40f-6539-4077-ad05-47338886b953,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-19T05:17:58.365790639Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5b53ae402e511c88443d0c03a2359ae7b951068ac0228dfddf7cb93987a4c2ed,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-678139,Uid:dab2b8a4a9d85132a00d7897f5bbd2be,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721366273854098444,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-678139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dab2b8a4a9d85132a00d789
7f5bbd2be,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: dab2b8a4a9d85132a00d7897f5bbd2be,kubernetes.io/config.seen: 2024-07-19T05:17:53.384799426Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:58b1993058bdcb0b60cb73060092035443773764d0e625206b8ebb42c22e498a,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-678139,Uid:8b065617a689e29b19d9702df34e2576,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721366273842960597,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-678139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b065617a689e29b19d9702df34e2576,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.182:8443,kubernetes.io/config.hash: 8b065617a689e29b19d9702df34e2576,kubernetes.io/config.seen: 2024-07-19T05:17:53.384794929Z,kubernetes.i
o/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f55833907a71ae67f93aed27b9109dd6156a2c8cf380493923f7da8d6136ee37,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-678139,Uid:8cbf0fa47da16b9db337d55f9aa2f800,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1721366273833986485,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-678139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbf0fa47da16b9db337d55f9aa2f800,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8cbf0fa47da16b9db337d55f9aa2f800,kubernetes.io/config.seen: 2024-07-19T05:17:53.384796566Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dd937e56b51533af44c39d5ed27a8557f4547f9386ef037df9e365c50dba6269,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-678139,Uid:54fc7244032af6fbc9cec1931b83b182,Namespace:kube-system,Atte
mpt:1,},State:SANDBOX_READY,CreatedAt:1721366273818843819,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-678139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54fc7244032af6fbc9cec1931b83b182,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.182:2379,kubernetes.io/config.hash: 54fc7244032af6fbc9cec1931b83b182,kubernetes.io/config.seen: 2024-07-19T05:17:53.384790409Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=f0e3d4af-ac47-4df3-a6e1-a3bd57b043a9 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 19 05:18:01 kubernetes-upgrade-678139 crio[2371]: time="2024-07-19 05:18:01.403132253Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4b313d94-52f9-4afc-aee9-987d627f1752 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 05:18:01 kubernetes-upgrade-678139 crio[2371]: time="2024-07-19 05:18:01.403193878Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4b313d94-52f9-4afc-aee9-987d627f1752 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 05:18:01 kubernetes-upgrade-678139 crio[2371]: time="2024-07-19 05:18:01.403784257Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60a85701a92a40dbe3781e3c820bb830869ef84e113e52e88cc37b5e372e461b,PodSandboxId:05986a737248e75ab104f952fd3f73f92a54fd8efa3e1a5e0a91e7d1aa2af104,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721366279300826210,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-9dm9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11a9792-a256-4533-aa0c-a17b135e3911,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61e7e53d200b5e0364805774dda1e3eb7b828351b3c283263d9495aa502e5056,PodSandboxId:1286049b10a47ea246a23e25f64895334aee913eddc9b9fd72b314023bfe9ff3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721366279245204902,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-b58zr,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: ef69b4f1-4269-4939-8f68-3d52b1734a63,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33a236b975cf5c0cedfbc41b28b039dca8cfea3651d8dfd4f890b9d8c1530e3f,PodSandboxId:a0b27a1e5bceb5e0e43716c08b90e0367bd5f6f8303fe8e62e3dd0c3d8af9e21,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1721366278884631724,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1fa432e-ba36-482f-8ba3-645e19a122d7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aa8374477a9c46b69a08e175ff1156d05650d180d07664927dc814da71a770f,PodSandboxId:338a48f6f0b953b047e98c897cd1fcd4cc683d55af0c686ddf7b5c7572b3f33f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,C
reatedAt:1721366278846781192,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4tvdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46eed40f-6539-4077-ad05-47338886b953,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5ae81ca6dda63d0f13c9dc45c350bca67dbb100fb4f433db3d2873fd5f97672,PodSandboxId:f55833907a71ae67f93aed27b9109dd6156a2c8cf380493923f7da8d6136ee37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:17213662741
22288756,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-678139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbf0fa47da16b9db337d55f9aa2f800,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4907eab3221dfcb85b318ebbf9fc033e78f8bc88189027fdacd0326dbf52b93,PodSandboxId:5b53ae402e511c88443d0c03a2359ae7b951068ac0228dfddf7cb93987a4c2ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedA
t:1721366274092062631,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-678139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dab2b8a4a9d85132a00d7897f5bbd2be,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06aba00c601ec1d0b56b19c038b5414b9bd5922fe00864d9cbafff8ba56ccb14,PodSandboxId:dd937e56b51533af44c39d5ed27a8557f4547f9386ef037df9e365c50dba6269,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721366274050
879497,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-678139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54fc7244032af6fbc9cec1931b83b182,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0613c6c6f1e6ec29d5a721422b4c849786adc1970c5dc82815bc8de4dbb4699,PodSandboxId:58b1993058bdcb0b60cb73060092035443773764d0e625206b8ebb42c22e498a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721366274045738069,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-678139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b065617a689e29b19d9702df34e2576,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4b313d94-52f9-4afc-aee9-987d627f1752 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 05:18:01 kubernetes-upgrade-678139 crio[2371]: time="2024-07-19 05:18:01.420130661Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1d8e7431-75b0-45cc-be01-ec9320c5735c name=/runtime.v1.RuntimeService/Version
	Jul 19 05:18:01 kubernetes-upgrade-678139 crio[2371]: time="2024-07-19 05:18:01.420238593Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1d8e7431-75b0-45cc-be01-ec9320c5735c name=/runtime.v1.RuntimeService/Version
	Jul 19 05:18:01 kubernetes-upgrade-678139 crio[2371]: time="2024-07-19 05:18:01.425678936Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=86707cf9-6fb8-4aec-b71a-4b4a15322a0c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 05:18:01 kubernetes-upgrade-678139 crio[2371]: time="2024-07-19 05:18:01.426056806Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721366281426013812,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=86707cf9-6fb8-4aec-b71a-4b4a15322a0c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 05:18:01 kubernetes-upgrade-678139 crio[2371]: time="2024-07-19 05:18:01.426586403Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0ebe3a43-fd9a-4e39-93f3-d59e2561597b name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 05:18:01 kubernetes-upgrade-678139 crio[2371]: time="2024-07-19 05:18:01.426640322Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0ebe3a43-fd9a-4e39-93f3-d59e2561597b name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 05:18:01 kubernetes-upgrade-678139 crio[2371]: time="2024-07-19 05:18:01.426973081Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60a85701a92a40dbe3781e3c820bb830869ef84e113e52e88cc37b5e372e461b,PodSandboxId:05986a737248e75ab104f952fd3f73f92a54fd8efa3e1a5e0a91e7d1aa2af104,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721366279300826210,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-9dm9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11a9792-a256-4533-aa0c-a17b135e3911,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61e7e53d200b5e0364805774dda1e3eb7b828351b3c283263d9495aa502e5056,PodSandboxId:1286049b10a47ea246a23e25f64895334aee913eddc9b9fd72b314023bfe9ff3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721366279245204902,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-b58zr,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: ef69b4f1-4269-4939-8f68-3d52b1734a63,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33a236b975cf5c0cedfbc41b28b039dca8cfea3651d8dfd4f890b9d8c1530e3f,PodSandboxId:a0b27a1e5bceb5e0e43716c08b90e0367bd5f6f8303fe8e62e3dd0c3d8af9e21,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1721366278884631724,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1fa432e-ba36-482f-8ba3-645e19a122d7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aa8374477a9c46b69a08e175ff1156d05650d180d07664927dc814da71a770f,PodSandboxId:338a48f6f0b953b047e98c897cd1fcd4cc683d55af0c686ddf7b5c7572b3f33f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,C
reatedAt:1721366278846781192,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4tvdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46eed40f-6539-4077-ad05-47338886b953,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5ae81ca6dda63d0f13c9dc45c350bca67dbb100fb4f433db3d2873fd5f97672,PodSandboxId:f55833907a71ae67f93aed27b9109dd6156a2c8cf380493923f7da8d6136ee37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:17213662741
22288756,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-678139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbf0fa47da16b9db337d55f9aa2f800,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4907eab3221dfcb85b318ebbf9fc033e78f8bc88189027fdacd0326dbf52b93,PodSandboxId:5b53ae402e511c88443d0c03a2359ae7b951068ac0228dfddf7cb93987a4c2ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedA
t:1721366274092062631,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-678139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dab2b8a4a9d85132a00d7897f5bbd2be,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06aba00c601ec1d0b56b19c038b5414b9bd5922fe00864d9cbafff8ba56ccb14,PodSandboxId:dd937e56b51533af44c39d5ed27a8557f4547f9386ef037df9e365c50dba6269,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721366274050
879497,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-678139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54fc7244032af6fbc9cec1931b83b182,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0613c6c6f1e6ec29d5a721422b4c849786adc1970c5dc82815bc8de4dbb4699,PodSandboxId:58b1993058bdcb0b60cb73060092035443773764d0e625206b8ebb42c22e498a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721366274045738069,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-678139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b065617a689e29b19d9702df34e2576,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7e09c1fc5932e92fd249a5a998a3d0997abebedcb47ca8f3036ce5c4b6ba980,PodSandboxId:9d0412cf73cc78db6e952eabb20ebd553608ef295fe86cc4be63617a38b3a8d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721366234807390131,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-b58zr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef69b4f1-4269-4939-8f68-3d52b1734a63,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d9695c40c1a07bcf34870fcdfac1dcaf980e44e219e9071561c817d0eab3c5,PodSandboxId:ae34d8ad59b48078cf8756d79e46a9c2e249165d75335784bc6629eb1689d4d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721366234819392042,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-9dm9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11a9792-a256-4533-aa0c-a17b135e3911,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9a45bb5095f84ae8fd6cd4a4a5a6c47895dfef7a6f6aa0fbf720024cfcdd2fd,PodSandboxId:7293aa92f351c3415605a062a1ecc2f3944c7aa90fcfb23
a121e3cfb01f1629a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721366234054592393,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1fa432e-ba36-482f-8ba3-645e19a122d7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82cc295e180519c4b8a3efea9f80cdc2faa0e77964aded2d311eab8f129e280c,PodSandboxId:895bfcb9cf957ea468b6323f849ad9a5b47f0596ee49e8dd3ee6d6c428f3
bebb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721366233298535063,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4tvdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46eed40f-6539-4077-ad05-47338886b953,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa4768acf72ce36ad24f1357d853689140b3663f75b3ccef04cf3b1fee65e320,PodSandboxId:dda85f955e3ddd74c53a296d87d17680a296f9b221b10a577835bb32d783e6d0,Metadata:&ContainerMetadata{N
ame:etcd,Attempt:0,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721366222181274439,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-678139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54fc7244032af6fbc9cec1931b83b182,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aab590d1cde5936edd1661a7d38caf0025eba3b3162f6b82e566440bf6e7e58,PodSandboxId:70be45cbb48789fe8ce1df0e1a6f33ccfa797eaee28006817757c6a733caf9f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:
&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721366222176678017,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-678139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dab2b8a4a9d85132a00d7897f5bbd2be,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67b8f05eb247de04b2ab7e06179d48f36a01742c5868717080538be4478d5c16,PodSandboxId:5a2537885f8aa67266d703f755b5a56d3a2091a913ab258f882bbbb08e011c99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Ima
ge:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721366222163033740,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-678139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbf0fa47da16b9db337d55f9aa2f800,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:774e1f915cd8174aa29461b789524f64f617e3d9cc102e144967424ef96afa01,PodSandboxId:964ea1234564684fc60e0dc706bf9ce8e3c45f19a5c566866969cc353e6847c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Att
empt:0,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721366222151480172,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-678139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b065617a689e29b19d9702df34e2576,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0ebe3a43-fd9a-4e39-93f3-d59e2561597b name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 05:18:01 kubernetes-upgrade-678139 crio[2371]: time="2024-07-19 05:18:01.471713485Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e0618f71-813e-4a74-badd-2d15c97d4494 name=/runtime.v1.RuntimeService/Version
	Jul 19 05:18:01 kubernetes-upgrade-678139 crio[2371]: time="2024-07-19 05:18:01.471811236Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e0618f71-813e-4a74-badd-2d15c97d4494 name=/runtime.v1.RuntimeService/Version
	Jul 19 05:18:01 kubernetes-upgrade-678139 crio[2371]: time="2024-07-19 05:18:01.472953605Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=728b5472-23f7-4aa1-86d4-0f86ea07eb6f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 05:18:01 kubernetes-upgrade-678139 crio[2371]: time="2024-07-19 05:18:01.473448124Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721366281473422643,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=728b5472-23f7-4aa1-86d4-0f86ea07eb6f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 05:18:01 kubernetes-upgrade-678139 crio[2371]: time="2024-07-19 05:18:01.473987570Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c423001a-fc9e-4e28-b659-3ec3b07cbcb8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 05:18:01 kubernetes-upgrade-678139 crio[2371]: time="2024-07-19 05:18:01.474042177Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c423001a-fc9e-4e28-b659-3ec3b07cbcb8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 05:18:01 kubernetes-upgrade-678139 crio[2371]: time="2024-07-19 05:18:01.474402959Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60a85701a92a40dbe3781e3c820bb830869ef84e113e52e88cc37b5e372e461b,PodSandboxId:05986a737248e75ab104f952fd3f73f92a54fd8efa3e1a5e0a91e7d1aa2af104,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721366279300826210,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-9dm9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11a9792-a256-4533-aa0c-a17b135e3911,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61e7e53d200b5e0364805774dda1e3eb7b828351b3c283263d9495aa502e5056,PodSandboxId:1286049b10a47ea246a23e25f64895334aee913eddc9b9fd72b314023bfe9ff3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721366279245204902,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-b58zr,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: ef69b4f1-4269-4939-8f68-3d52b1734a63,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33a236b975cf5c0cedfbc41b28b039dca8cfea3651d8dfd4f890b9d8c1530e3f,PodSandboxId:a0b27a1e5bceb5e0e43716c08b90e0367bd5f6f8303fe8e62e3dd0c3d8af9e21,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1721366278884631724,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1fa432e-ba36-482f-8ba3-645e19a122d7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aa8374477a9c46b69a08e175ff1156d05650d180d07664927dc814da71a770f,PodSandboxId:338a48f6f0b953b047e98c897cd1fcd4cc683d55af0c686ddf7b5c7572b3f33f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,C
reatedAt:1721366278846781192,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4tvdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46eed40f-6539-4077-ad05-47338886b953,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5ae81ca6dda63d0f13c9dc45c350bca67dbb100fb4f433db3d2873fd5f97672,PodSandboxId:f55833907a71ae67f93aed27b9109dd6156a2c8cf380493923f7da8d6136ee37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:17213662741
22288756,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-678139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbf0fa47da16b9db337d55f9aa2f800,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4907eab3221dfcb85b318ebbf9fc033e78f8bc88189027fdacd0326dbf52b93,PodSandboxId:5b53ae402e511c88443d0c03a2359ae7b951068ac0228dfddf7cb93987a4c2ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedA
t:1721366274092062631,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-678139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dab2b8a4a9d85132a00d7897f5bbd2be,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06aba00c601ec1d0b56b19c038b5414b9bd5922fe00864d9cbafff8ba56ccb14,PodSandboxId:dd937e56b51533af44c39d5ed27a8557f4547f9386ef037df9e365c50dba6269,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721366274050
879497,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-678139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54fc7244032af6fbc9cec1931b83b182,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0613c6c6f1e6ec29d5a721422b4c849786adc1970c5dc82815bc8de4dbb4699,PodSandboxId:58b1993058bdcb0b60cb73060092035443773764d0e625206b8ebb42c22e498a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721366274045738069,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-678139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b065617a689e29b19d9702df34e2576,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7e09c1fc5932e92fd249a5a998a3d0997abebedcb47ca8f3036ce5c4b6ba980,PodSandboxId:9d0412cf73cc78db6e952eabb20ebd553608ef295fe86cc4be63617a38b3a8d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721366234807390131,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-b58zr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef69b4f1-4269-4939-8f68-3d52b1734a63,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d9695c40c1a07bcf34870fcdfac1dcaf980e44e219e9071561c817d0eab3c5,PodSandboxId:ae34d8ad59b48078cf8756d79e46a9c2e249165d75335784bc6629eb1689d4d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721366234819392042,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-9dm9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11a9792-a256-4533-aa0c-a17b135e3911,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9a45bb5095f84ae8fd6cd4a4a5a6c47895dfef7a6f6aa0fbf720024cfcdd2fd,PodSandboxId:7293aa92f351c3415605a062a1ecc2f3944c7aa90fcfb23
a121e3cfb01f1629a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721366234054592393,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1fa432e-ba36-482f-8ba3-645e19a122d7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82cc295e180519c4b8a3efea9f80cdc2faa0e77964aded2d311eab8f129e280c,PodSandboxId:895bfcb9cf957ea468b6323f849ad9a5b47f0596ee49e8dd3ee6d6c428f3
bebb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721366233298535063,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4tvdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46eed40f-6539-4077-ad05-47338886b953,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa4768acf72ce36ad24f1357d853689140b3663f75b3ccef04cf3b1fee65e320,PodSandboxId:dda85f955e3ddd74c53a296d87d17680a296f9b221b10a577835bb32d783e6d0,Metadata:&ContainerMetadata{N
ame:etcd,Attempt:0,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721366222181274439,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-678139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54fc7244032af6fbc9cec1931b83b182,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aab590d1cde5936edd1661a7d38caf0025eba3b3162f6b82e566440bf6e7e58,PodSandboxId:70be45cbb48789fe8ce1df0e1a6f33ccfa797eaee28006817757c6a733caf9f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:
&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721366222176678017,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-678139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dab2b8a4a9d85132a00d7897f5bbd2be,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67b8f05eb247de04b2ab7e06179d48f36a01742c5868717080538be4478d5c16,PodSandboxId:5a2537885f8aa67266d703f755b5a56d3a2091a913ab258f882bbbb08e011c99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Ima
ge:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721366222163033740,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-678139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbf0fa47da16b9db337d55f9aa2f800,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:774e1f915cd8174aa29461b789524f64f617e3d9cc102e144967424ef96afa01,PodSandboxId:964ea1234564684fc60e0dc706bf9ce8e3c45f19a5c566866969cc353e6847c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Att
empt:0,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721366222151480172,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-678139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b065617a689e29b19d9702df34e2576,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c423001a-fc9e-4e28-b659-3ec3b07cbcb8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 05:18:01 kubernetes-upgrade-678139 crio[2371]: time="2024-07-19 05:18:01.509518242Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=66239131-f7fb-43d6-bf5f-af260ad0d1f6 name=/runtime.v1.RuntimeService/Version
	Jul 19 05:18:01 kubernetes-upgrade-678139 crio[2371]: time="2024-07-19 05:18:01.509590442Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=66239131-f7fb-43d6-bf5f-af260ad0d1f6 name=/runtime.v1.RuntimeService/Version
	Jul 19 05:18:01 kubernetes-upgrade-678139 crio[2371]: time="2024-07-19 05:18:01.510396919Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6e84af7b-8ca8-4167-ae66-48b1ca4cc6b6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 05:18:01 kubernetes-upgrade-678139 crio[2371]: time="2024-07-19 05:18:01.510748184Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1721366281510728108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6e84af7b-8ca8-4167-ae66-48b1ca4cc6b6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 19 05:18:01 kubernetes-upgrade-678139 crio[2371]: time="2024-07-19 05:18:01.511133961Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d83148dc-1c60-4bf8-8263-44bad6b73ceb name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 05:18:01 kubernetes-upgrade-678139 crio[2371]: time="2024-07-19 05:18:01.511184905Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d83148dc-1c60-4bf8-8263-44bad6b73ceb name=/runtime.v1.RuntimeService/ListContainers
	Jul 19 05:18:01 kubernetes-upgrade-678139 crio[2371]: time="2024-07-19 05:18:01.511730335Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:60a85701a92a40dbe3781e3c820bb830869ef84e113e52e88cc37b5e372e461b,PodSandboxId:05986a737248e75ab104f952fd3f73f92a54fd8efa3e1a5e0a91e7d1aa2af104,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721366279300826210,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-9dm9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11a9792-a256-4533-aa0c-a17b135e3911,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61e7e53d200b5e0364805774dda1e3eb7b828351b3c283263d9495aa502e5056,PodSandboxId:1286049b10a47ea246a23e25f64895334aee913eddc9b9fd72b314023bfe9ff3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1721366279245204902,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-b58zr,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: ef69b4f1-4269-4939-8f68-3d52b1734a63,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33a236b975cf5c0cedfbc41b28b039dca8cfea3651d8dfd4f890b9d8c1530e3f,PodSandboxId:a0b27a1e5bceb5e0e43716c08b90e0367bd5f6f8303fe8e62e3dd0c3d8af9e21,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1721366278884631724,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1fa432e-ba36-482f-8ba3-645e19a122d7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2aa8374477a9c46b69a08e175ff1156d05650d180d07664927dc814da71a770f,PodSandboxId:338a48f6f0b953b047e98c897cd1fcd4cc683d55af0c686ddf7b5c7572b3f33f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,C
reatedAt:1721366278846781192,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4tvdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46eed40f-6539-4077-ad05-47338886b953,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5ae81ca6dda63d0f13c9dc45c350bca67dbb100fb4f433db3d2873fd5f97672,PodSandboxId:f55833907a71ae67f93aed27b9109dd6156a2c8cf380493923f7da8d6136ee37,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:17213662741
22288756,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-678139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbf0fa47da16b9db337d55f9aa2f800,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4907eab3221dfcb85b318ebbf9fc033e78f8bc88189027fdacd0326dbf52b93,PodSandboxId:5b53ae402e511c88443d0c03a2359ae7b951068ac0228dfddf7cb93987a4c2ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedA
t:1721366274092062631,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-678139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dab2b8a4a9d85132a00d7897f5bbd2be,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06aba00c601ec1d0b56b19c038b5414b9bd5922fe00864d9cbafff8ba56ccb14,PodSandboxId:dd937e56b51533af44c39d5ed27a8557f4547f9386ef037df9e365c50dba6269,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1721366274050
879497,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-678139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54fc7244032af6fbc9cec1931b83b182,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0613c6c6f1e6ec29d5a721422b4c849786adc1970c5dc82815bc8de4dbb4699,PodSandboxId:58b1993058bdcb0b60cb73060092035443773764d0e625206b8ebb42c22e498a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1721366274045738069,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-678139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b065617a689e29b19d9702df34e2576,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7e09c1fc5932e92fd249a5a998a3d0997abebedcb47ca8f3036ce5c4b6ba980,PodSandboxId:9d0412cf73cc78db6e952eabb20ebd553608ef295fe86cc4be63617a38b3a8d4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721366234807390131,Labels:map[string]string{io.kube
rnetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-b58zr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef69b4f1-4269-4939-8f68-3d52b1734a63,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d9695c40c1a07bcf34870fcdfac1dcaf980e44e219e9071561c817d0eab3c5,PodSandboxId:ae34d8ad59b48078cf8756d79e46a9c2e249165d75335784bc6629eb1689d4d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Use
rSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1721366234819392042,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-9dm9s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b11a9792-a256-4533-aa0c-a17b135e3911,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9a45bb5095f84ae8fd6cd4a4a5a6c47895dfef7a6f6aa0fbf720024cfcdd2fd,PodSandboxId:7293aa92f351c3415605a062a1ecc2f3944c7aa90fcfb23
a121e3cfb01f1629a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1721366234054592393,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1fa432e-ba36-482f-8ba3-645e19a122d7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82cc295e180519c4b8a3efea9f80cdc2faa0e77964aded2d311eab8f129e280c,PodSandboxId:895bfcb9cf957ea468b6323f849ad9a5b47f0596ee49e8dd3ee6d6c428f3
bebb,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1721366233298535063,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4tvdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46eed40f-6539-4077-ad05-47338886b953,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa4768acf72ce36ad24f1357d853689140b3663f75b3ccef04cf3b1fee65e320,PodSandboxId:dda85f955e3ddd74c53a296d87d17680a296f9b221b10a577835bb32d783e6d0,Metadata:&ContainerMetadata{N
ame:etcd,Attempt:0,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1721366222181274439,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-678139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 54fc7244032af6fbc9cec1931b83b182,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aab590d1cde5936edd1661a7d38caf0025eba3b3162f6b82e566440bf6e7e58,PodSandboxId:70be45cbb48789fe8ce1df0e1a6f33ccfa797eaee28006817757c6a733caf9f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:
&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1721366222176678017,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-678139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dab2b8a4a9d85132a00d7897f5bbd2be,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67b8f05eb247de04b2ab7e06179d48f36a01742c5868717080538be4478d5c16,PodSandboxId:5a2537885f8aa67266d703f755b5a56d3a2091a913ab258f882bbbb08e011c99,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Ima
ge:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1721366222163033740,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-678139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cbf0fa47da16b9db337d55f9aa2f800,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:774e1f915cd8174aa29461b789524f64f617e3d9cc102e144967424ef96afa01,PodSandboxId:964ea1234564684fc60e0dc706bf9ce8e3c45f19a5c566866969cc353e6847c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Att
empt:0,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1721366222151480172,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-678139,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b065617a689e29b19d9702df34e2576,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d83148dc-1c60-4bf8-8263-44bad6b73ceb name=/runtime.v1.RuntimeService/ListContainers
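
	For readers tracing the repeated /runtime.v1.RuntimeService/ListContainers entries above, the following is a minimal, illustrative sketch of how the same call can be issued against CRI-O's gRPC endpoint using the k8s.io/cri-api Go bindings. The socket path and the printed fields are assumptions made for illustration; this snippet is not part of the minikube test harness.

	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"time"

	    	"google.golang.org/grpc"
	    	"google.golang.org/grpc/credentials/insecure"
	    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	    )

	    func main() {
	    	// Assumed socket path: CRI-O's default listen address.
	    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
	    		grpc.WithTransportCredentials(insecure.NewCredentials()))
	    	if err != nil {
	    		panic(err)
	    	}
	    	defer conn.Close()

	    	client := runtimeapi.NewRuntimeServiceClient(conn)
	    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	    	defer cancel()

	    	// An empty filter returns the full container list, which is what the
	    	// "No filters were applied" debug lines above correspond to.
	    	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
	    		Filter: &runtimeapi.ContainerFilter{},
	    	})
	    	if err != nil {
	    		panic(err)
	    	}
	    	for _, c := range resp.Containers {
	    		fmt.Printf("%s\t%s\t%s\t%s\n",
	    			c.Id[:13], c.Metadata.Name, c.State, time.Unix(0, c.CreatedAt).UTC())
	    	}
	    }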
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	60a85701a92a4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   2 seconds ago       Running             coredns                   1                   05986a737248e       coredns-5cfdc65f69-9dm9s
	61e7e53d200b5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   2 seconds ago       Running             coredns                   1                   1286049b10a47       coredns-5cfdc65f69-b58zr
	33a236b975cf5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   2 seconds ago       Running             storage-provisioner       1                   a0b27a1e5bceb       storage-provisioner
	2aa8374477a9c       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   2 seconds ago       Running             kube-proxy                1                   338a48f6f0b95       kube-proxy-4tvdc
	f5ae81ca6dda6       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   7 seconds ago       Running             kube-controller-manager   1                   f55833907a71a       kube-controller-manager-kubernetes-upgrade-678139
	a4907eab3221d       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   7 seconds ago       Running             kube-scheduler            1                   5b53ae402e511       kube-scheduler-kubernetes-upgrade-678139
	06aba00c601ec       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   7 seconds ago       Running             etcd                      1                   dd937e56b5153       etcd-kubernetes-upgrade-678139
	a0613c6c6f1e6       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   7 seconds ago       Running             kube-apiserver            1                   58b1993058bdc       kube-apiserver-kubernetes-upgrade-678139
	f8d9695c40c1a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   46 seconds ago      Exited              coredns                   0                   ae34d8ad59b48       coredns-5cfdc65f69-9dm9s
	c7e09c1fc5932       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   46 seconds ago      Exited              coredns                   0                   9d0412cf73cc7       coredns-5cfdc65f69-b58zr
	f9a45bb5095f8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   47 seconds ago      Exited              storage-provisioner       0                   7293aa92f351c       storage-provisioner
	82cc295e18051       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   48 seconds ago      Exited              kube-proxy                0                   895bfcb9cf957       kube-proxy-4tvdc
	aa4768acf72ce       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   59 seconds ago      Exited              etcd                      0                   dda85f955e3dd       etcd-kubernetes-upgrade-678139
	0aab590d1cde5       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   59 seconds ago      Exited              kube-scheduler            0                   70be45cbb4878       kube-scheduler-kubernetes-upgrade-678139
	67b8f05eb247d       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   59 seconds ago      Exited              kube-controller-manager   0                   5a2537885f8aa       kube-controller-manager-kubernetes-upgrade-678139
	774e1f915cd81       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   59 seconds ago      Exited              kube-apiserver            0                   964ea12345646       kube-apiserver-kubernetes-upgrade-678139
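
	The CREATED column in this crictl-style listing is the elapsed time since each container's nanosecond CreatedAt value in the crio debug output above. A small illustrative conversion follows; the reference instant is read off the 05:18:01 log timestamps and is an assumption, not something the test itself computes this way.

	    package main

	    import (
	    	"fmt"
	    	"time"
	    )

	    func main() {
	    	// CreatedAt in the CRI ListContainers output is a unix timestamp in
	    	// nanoseconds; the age shown in the listing is the time elapsed since then.
	    	listedAt := time.Date(2024, time.July, 19, 5, 18, 1, 0, time.UTC) // assumed reference instant

	    	createdAt := int64(1721366278884631724) // storage-provisioner, attempt 1, from the log above
	    	age := listedAt.Sub(time.Unix(0, createdAt)).Round(time.Second)
	    	fmt.Printf("storage-provisioner created %s ago\n", age) // prints "2s", i.e. "2 seconds ago"
	    }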
	
	
	==> coredns [60a85701a92a40dbe3781e3c820bb830869ef84e113e52e88cc37b5e372e461b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [61e7e53d200b5e0364805774dda1e3eb7b828351b3c283263d9495aa502e5056] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [c7e09c1fc5932e92fd249a5a998a3d0997abebedcb47ca8f3036ce5c4b6ba980] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f8d9695c40c1a07bcf34870fcdfac1dcaf980e44e219e9071561c817d0eab3c5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-678139
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-678139
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 05:17:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-678139
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 05:17:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 05:17:57 +0000   Fri, 19 Jul 2024 05:17:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 05:17:57 +0000   Fri, 19 Jul 2024 05:17:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 05:17:57 +0000   Fri, 19 Jul 2024 05:17:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 05:17:57 +0000   Fri, 19 Jul 2024 05:17:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.182
	  Hostname:    kubernetes-upgrade-678139
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 797db3e0a0884f6e99dedf244b05f7e8
	  System UUID:                797db3e0-a088-4f6e-99de-df244b05f7e8
	  Boot ID:                    ed282844-870f-42e6-81ab-d0d3e6b6f477
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-9dm9s                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     49s
	  kube-system                 coredns-5cfdc65f69-b58zr                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     49s
	  kube-system                 etcd-kubernetes-upgrade-678139                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         50s
	  kube-system                 kube-apiserver-kubernetes-upgrade-678139             250m (12%)    0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-678139    200m (10%)    0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kube-proxy-4tvdc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 kube-scheduler-kubernetes-upgrade-678139             100m (5%)     0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
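	(Cross-check, using only the pod table above: the Allocated resources figures are the column sums. CPU requests 100m + 100m + 100m + 250m + 200m + 100m = 850m, which is 42% of the node's 2 CPUs (850/2000 = 42.5%, displayed rounded down); memory requests 70Mi + 70Mi + 100Mi = 240Mi and limits 170Mi + 170Mi = 340Mi, roughly 11% and 16% of the 2164184Ki allocatable.)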
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 48s                kube-proxy       
	  Normal  Starting                 2s                 kube-proxy       
	  Normal  Starting                 63s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  61s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  60s (x8 over 63s)  kubelet          Node kubernetes-upgrade-678139 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 63s)  kubelet          Node kubernetes-upgrade-678139 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x7 over 63s)  kubelet          Node kubernetes-upgrade-678139 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                node-controller  Node kubernetes-upgrade-678139 event: Registered Node kubernetes-upgrade-678139 in Controller
	  Normal  RegisteredNode           0s                 node-controller  Node kubernetes-upgrade-678139 event: Registered Node kubernetes-upgrade-678139 in Controller
	
	
	==> dmesg <==
	[  +1.531397] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.930799] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.053691] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064005] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.165020] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.132055] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.277107] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +3.893049] systemd-fstab-generator[734]: Ignoring "noauto" option for root device
	[  +1.722148] systemd-fstab-generator[854]: Ignoring "noauto" option for root device
	[  +0.065416] kauditd_printk_skb: 158 callbacks suppressed
	[Jul19 05:17] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	[  +0.177506] kauditd_printk_skb: 69 callbacks suppressed
	[ +30.569444] systemd-fstab-generator[2204]: Ignoring "noauto" option for root device
	[  +0.089381] kauditd_printk_skb: 111 callbacks suppressed
	[  +0.071144] systemd-fstab-generator[2216]: Ignoring "noauto" option for root device
	[  +0.170940] systemd-fstab-generator[2230]: Ignoring "noauto" option for root device
	[  +0.173888] systemd-fstab-generator[2242]: Ignoring "noauto" option for root device
	[  +0.407659] systemd-fstab-generator[2340]: Ignoring "noauto" option for root device
	[  +6.793534] systemd-fstab-generator[2456]: Ignoring "noauto" option for root device
	[  +0.065700] kauditd_printk_skb: 112 callbacks suppressed
	[  +2.254054] systemd-fstab-generator[2577]: Ignoring "noauto" option for root device
	[  +5.596219] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.952296] systemd-fstab-generator[3430]: Ignoring "noauto" option for root device
	
	
	==> etcd [06aba00c601ec1d0b56b19c038b5414b9bd5922fe00864d9cbafff8ba56ccb14] <==
	{"level":"info","ts":"2024-07-19T05:17:54.488149Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"121410669cdaaf0c","local-member-id":"ba72368f65a77be1","added-peer-id":"ba72368f65a77be1","added-peer-peer-urls":["https://192.168.50.182:2380"]}
	{"level":"info","ts":"2024-07-19T05:17:54.488354Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"121410669cdaaf0c","local-member-id":"ba72368f65a77be1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T05:17:54.488389Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T05:17:54.489942Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-19T05:17:54.500493Z","caller":"embed/etcd.go:727","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-19T05:17:54.500757Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ba72368f65a77be1","initial-advertise-peer-urls":["https://192.168.50.182:2380"],"listen-peer-urls":["https://192.168.50.182:2380"],"advertise-client-urls":["https://192.168.50.182:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.182:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-19T05:17:54.500799Z","caller":"embed/etcd.go:858","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-19T05:17:54.500961Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.50.182:2380"}
	{"level":"info","ts":"2024-07-19T05:17:54.500988Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.50.182:2380"}
	{"level":"info","ts":"2024-07-19T05:17:56.042936Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba72368f65a77be1 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-19T05:17:56.042994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba72368f65a77be1 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-19T05:17:56.043034Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba72368f65a77be1 received MsgPreVoteResp from ba72368f65a77be1 at term 2"}
	{"level":"info","ts":"2024-07-19T05:17:56.043052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba72368f65a77be1 became candidate at term 3"}
	{"level":"info","ts":"2024-07-19T05:17:56.043059Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba72368f65a77be1 received MsgVoteResp from ba72368f65a77be1 at term 3"}
	{"level":"info","ts":"2024-07-19T05:17:56.04307Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ba72368f65a77be1 became leader at term 3"}
	{"level":"info","ts":"2024-07-19T05:17:56.04308Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ba72368f65a77be1 elected leader ba72368f65a77be1 at term 3"}
	{"level":"info","ts":"2024-07-19T05:17:56.050446Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ba72368f65a77be1","local-member-attributes":"{Name:kubernetes-upgrade-678139 ClientURLs:[https://192.168.50.182:2379]}","request-path":"/0/members/ba72368f65a77be1/attributes","cluster-id":"121410669cdaaf0c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T05:17:56.050743Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T05:17:56.053157Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T05:17:56.053085Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-19T05:17:56.055215Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.182:2379"}
	{"level":"info","ts":"2024-07-19T05:17:56.056434Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-19T05:17:56.05716Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-19T05:17:56.062358Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T05:17:56.062389Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [aa4768acf72ce36ad24f1357d853689140b3663f75b3ccef04cf3b1fee65e320] <==
	{"level":"info","ts":"2024-07-19T05:17:02.792616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ba72368f65a77be1 elected leader ba72368f65a77be1 at term 2"}
	{"level":"info","ts":"2024-07-19T05:17:02.79603Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ba72368f65a77be1","local-member-attributes":"{Name:kubernetes-upgrade-678139 ClientURLs:[https://192.168.50.182:2379]}","request-path":"/0/members/ba72368f65a77be1/attributes","cluster-id":"121410669cdaaf0c","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T05:17:02.796241Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T05:17:02.796678Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T05:17:02.799038Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-19T05:17:02.805998Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-19T05:17:02.812377Z","caller":"etcdserver/server.go:2628","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T05:17:02.81433Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T05:17:02.814378Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T05:17:02.815865Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-19T05:17:02.819849Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.182:2379"}
	{"level":"info","ts":"2024-07-19T05:17:02.832233Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"121410669cdaaf0c","local-member-id":"ba72368f65a77be1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T05:17:02.859465Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T05:17:02.859571Z","caller":"etcdserver/server.go:2652","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T05:17:18.376467Z","caller":"traceutil/trace.go:171","msg":"trace[644697298] transaction","detail":"{read_only:false; response_revision:388; number_of_response:1; }","duration":"121.841901ms","start":"2024-07-19T05:17:18.254602Z","end":"2024-07-19T05:17:18.376444Z","steps":["trace[644697298] 'process raft request'  (duration: 121.645807ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T05:17:36.534382Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-19T05:17:36.534437Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"kubernetes-upgrade-678139","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.182:2380"],"advertise-client-urls":["https://192.168.50.182:2379"]}
	{"level":"warn","ts":"2024-07-19T05:17:36.534503Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T05:17:36.534584Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T05:17:36.631627Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.182:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T05:17:36.631732Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.182:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-19T05:17:36.633352Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ba72368f65a77be1","current-leader-member-id":"ba72368f65a77be1"}
	{"level":"info","ts":"2024-07-19T05:17:36.635847Z","caller":"embed/etcd.go:580","msg":"stopping serving peer traffic","address":"192.168.50.182:2380"}
	{"level":"info","ts":"2024-07-19T05:17:36.635947Z","caller":"embed/etcd.go:585","msg":"stopped serving peer traffic","address":"192.168.50.182:2380"}
	{"level":"info","ts":"2024-07-19T05:17:36.635972Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"kubernetes-upgrade-678139","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.182:2380"],"advertise-client-urls":["https://192.168.50.182:2379"]}
	
	
	==> kernel <==
	 05:18:02 up 1 min,  0 users,  load average: 1.69, 0.54, 0.19
	Linux kubernetes-upgrade-678139 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [774e1f915cd8174aa29461b789524f64f617e3d9cc102e144967424ef96afa01] <==
	W0719 05:17:36.561031       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:17:36.561141       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:17:36.561250       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:17:36.561607       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:17:36.561791       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:17:36.561894       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:17:36.561994       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:17:36.562090       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:17:36.566656       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:17:36.566808       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:17:36.566936       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:17:36.567011       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:17:36.567076       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:17:36.567143       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:17:36.567215       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:17:36.567280       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:17:36.567400       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:17:36.567467       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:17:36.567530       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:17:36.567591       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:17:36.567653       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:17:36.567721       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:17:36.567783       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:17:36.567846       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 05:17:36.567891       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [a0613c6c6f1e6ec29d5a721422b4c849786adc1970c5dc82815bc8de4dbb4699] <==
	I0719 05:17:57.352848       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0719 05:17:57.452487       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0719 05:17:57.453056       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0719 05:17:57.453168       1 aggregator.go:171] initial CRD sync complete...
	I0719 05:17:57.453194       1 autoregister_controller.go:144] Starting autoregister controller
	I0719 05:17:57.453201       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0719 05:17:57.453205       1 cache.go:39] Caches are synced for autoregister controller
	I0719 05:17:57.485005       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0719 05:17:57.485039       1 policy_source.go:224] refreshing policies
	I0719 05:17:57.529079       1 shared_informer.go:320] Caches are synced for configmaps
	I0719 05:17:57.529371       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0719 05:17:57.531126       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0719 05:17:57.531825       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0719 05:17:57.532108       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0719 05:17:57.533365       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0719 05:17:57.533443       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0719 05:17:57.560222       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0719 05:17:58.329902       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0719 05:17:59.284368       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0719 05:17:59.324198       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0719 05:17:59.447206       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0719 05:17:59.505487       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0719 05:17:59.520725       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0719 05:18:00.991228       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 05:18:01.065944       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [67b8f05eb247de04b2ab7e06179d48f36a01742c5868717080538be4478d5c16] <==
	I0719 05:17:12.427900       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="58.941µs"
	I0719 05:17:12.428221       1 shared_informer.go:320] Caches are synced for expand
	I0719 05:17:12.430990       1 shared_informer.go:320] Caches are synced for ephemeral
	I0719 05:17:12.478890       1 shared_informer.go:320] Caches are synced for PVC protection
	I0719 05:17:12.481409       1 shared_informer.go:320] Caches are synced for attach detach
	I0719 05:17:12.487582       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-678139"
	I0719 05:17:12.530428       1 shared_informer.go:320] Caches are synced for endpoint
	I0719 05:17:12.532543       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0719 05:17:12.532619       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-678139"
	I0719 05:17:12.534382       1 shared_informer.go:320] Caches are synced for resource quota
	I0719 05:17:12.538414       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 05:17:12.539191       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0719 05:17:12.541379       1 shared_informer.go:320] Caches are synced for resource quota
	I0719 05:17:12.563405       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 05:17:12.580398       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0719 05:17:14.129646       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="86.79µs"
	I0719 05:17:14.132089       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="46.027µs"
	I0719 05:17:14.185368       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="104.006µs"
	I0719 05:17:14.218836       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="74.086µs"
	I0719 05:17:15.138260       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-678139"
	I0719 05:17:15.640646       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="81.091µs"
	I0719 05:17:15.764054       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="37.129807ms"
	I0719 05:17:15.764177       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="72.83µs"
	I0719 05:17:15.794178       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="18.049102ms"
	I0719 05:17:15.795041       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="61.797µs"
	
	
	==> kube-controller-manager [f5ae81ca6dda63d0f13c9dc45c350bca67dbb100fb4f433db3d2873fd5f97672] <==
	I0719 05:18:01.088910       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0719 05:18:01.091177       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0719 05:18:01.254950       1 shared_informer.go:320] Caches are synced for attach detach
	I0719 05:18:01.290477       1 shared_informer.go:320] Caches are synced for daemon sets
	I0719 05:18:01.316575       1 shared_informer.go:320] Caches are synced for taint
	I0719 05:18:01.316715       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0719 05:18:01.316817       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-678139"
	I0719 05:18:01.316870       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0719 05:18:01.333916       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0719 05:18:01.333950       1 shared_informer.go:320] Caches are synced for stateful set
	I0719 05:18:01.338248       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0719 05:18:01.357870       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="19.535197ms"
	I0719 05:18:01.357975       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-5cfdc65f69" duration="54.147µs"
	I0719 05:18:01.368493       1 shared_informer.go:320] Caches are synced for disruption
	I0719 05:18:01.382363       1 shared_informer.go:320] Caches are synced for deployment
	I0719 05:18:01.545194       1 shared_informer.go:320] Caches are synced for crt configmap
	I0719 05:18:01.582484       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0719 05:18:01.633353       1 shared_informer.go:320] Caches are synced for job
	I0719 05:18:01.648174       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0719 05:18:01.670969       1 shared_informer.go:320] Caches are synced for cronjob
	I0719 05:18:01.842416       1 shared_informer.go:320] Caches are synced for resource quota
	I0719 05:18:01.882624       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 05:18:01.882656       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0719 05:18:01.885418       1 shared_informer.go:320] Caches are synced for resource quota
	I0719 05:18:01.895351       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [2aa8374477a9c46b69a08e175ff1156d05650d180d07664927dc814da71a770f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0719 05:17:59.296912       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0719 05:17:59.317852       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.50.182"]
	E0719 05:17:59.317906       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0719 05:17:59.412506       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0719 05:17:59.412576       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 05:17:59.412604       1 server_linux.go:170] "Using iptables Proxier"
	I0719 05:17:59.418480       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0719 05:17:59.418808       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0719 05:17:59.418822       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 05:17:59.428946       1 config.go:197] "Starting service config controller"
	I0719 05:17:59.429104       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 05:17:59.429174       1 config.go:104] "Starting endpoint slice config controller"
	I0719 05:17:59.429181       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 05:17:59.429738       1 config.go:326] "Starting node config controller"
	I0719 05:17:59.429744       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 05:17:59.529402       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 05:17:59.529491       1 shared_informer.go:320] Caches are synced for service config
	I0719 05:17:59.530136       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [82cc295e180519c4b8a3efea9f80cdc2faa0e77964aded2d311eab8f129e280c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0719 05:17:13.654407       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0719 05:17:13.682561       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.50.182"]
	E0719 05:17:13.682724       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0719 05:17:13.746075       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0719 05:17:13.746131       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 05:17:13.746178       1 server_linux.go:170] "Using iptables Proxier"
	I0719 05:17:13.749587       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0719 05:17:13.749960       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0719 05:17:13.749999       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 05:17:13.752288       1 config.go:197] "Starting service config controller"
	I0719 05:17:13.752425       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 05:17:13.752497       1 config.go:104] "Starting endpoint slice config controller"
	I0719 05:17:13.752524       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 05:17:13.756749       1 config.go:326] "Starting node config controller"
	I0719 05:17:13.756817       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 05:17:13.853106       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 05:17:13.853343       1 shared_informer.go:320] Caches are synced for service config
	I0719 05:17:13.857525       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0aab590d1cde5936edd1661a7d38caf0025eba3b3162f6b82e566440bf6e7e58] <==
	E0719 05:17:04.715184       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0719 05:17:04.717203       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 05:17:04.717237       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0719 05:17:04.717371       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 05:17:04.717400       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0719 05:17:05.646071       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 05:17:05.646124       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0719 05:17:05.728806       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 05:17:05.728951       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0719 05:17:05.736911       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0719 05:17:05.736959       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0719 05:17:05.912535       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 05:17:05.912595       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0719 05:17:05.913220       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 05:17:05.913259       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0719 05:17:05.964125       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 05:17:05.964175       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0719 05:17:05.977195       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 05:17:05.977288       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0719 05:17:06.011178       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0719 05:17:06.011267       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0719 05:17:06.017340       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 05:17:06.018457       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0719 05:17:08.207547       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0719 05:17:36.531288       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a4907eab3221dfcb85b318ebbf9fc033e78f8bc88189027fdacd0326dbf52b93] <==
	I0719 05:17:55.298845       1 serving.go:386] Generated self-signed cert in-memory
	W0719 05:17:57.371812       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0719 05:17:57.371848       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 05:17:57.371857       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0719 05:17:57.371869       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0719 05:17:57.439127       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0719 05:17:57.439159       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 05:17:57.457774       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0719 05:17:57.458978       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0719 05:17:57.459046       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 05:17:57.459064       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0719 05:17:57.562488       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 19 05:17:53 kubernetes-upgrade-678139 kubelet[2584]: I0719 05:17:53.589604    2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b065617a689e29b19d9702df34e2576-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-678139\" (UID: \"8b065617a689e29b19d9702df34e2576\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-678139"
	Jul 19 05:17:53 kubernetes-upgrade-678139 kubelet[2584]: I0719 05:17:53.589754    2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8cbf0fa47da16b9db337d55f9aa2f800-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-678139\" (UID: \"8cbf0fa47da16b9db337d55f9aa2f800\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-678139"
	Jul 19 05:17:53 kubernetes-upgrade-678139 kubelet[2584]: I0719 05:17:53.589920    2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8cbf0fa47da16b9db337d55f9aa2f800-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-678139\" (UID: \"8cbf0fa47da16b9db337d55f9aa2f800\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-678139"
	Jul 19 05:17:53 kubernetes-upgrade-678139 kubelet[2584]: I0719 05:17:53.590052    2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/54fc7244032af6fbc9cec1931b83b182-etcd-data\") pod \"etcd-kubernetes-upgrade-678139\" (UID: \"54fc7244032af6fbc9cec1931b83b182\") " pod="kube-system/etcd-kubernetes-upgrade-678139"
	Jul 19 05:17:53 kubernetes-upgrade-678139 kubelet[2584]: I0719 05:17:53.590183    2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b065617a689e29b19d9702df34e2576-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-678139\" (UID: \"8b065617a689e29b19d9702df34e2576\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-678139"
	Jul 19 05:17:53 kubernetes-upgrade-678139 kubelet[2584]: I0719 05:17:53.590352    2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8cbf0fa47da16b9db337d55f9aa2f800-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-678139\" (UID: \"8cbf0fa47da16b9db337d55f9aa2f800\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-678139"
	Jul 19 05:17:53 kubernetes-upgrade-678139 kubelet[2584]: I0719 05:17:53.590464    2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8cbf0fa47da16b9db337d55f9aa2f800-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-678139\" (UID: \"8cbf0fa47da16b9db337d55f9aa2f800\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-678139"
	Jul 19 05:17:53 kubernetes-upgrade-678139 kubelet[2584]: I0719 05:17:53.590608    2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8cbf0fa47da16b9db337d55f9aa2f800-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-678139\" (UID: \"8cbf0fa47da16b9db337d55f9aa2f800\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-678139"
	Jul 19 05:17:53 kubernetes-upgrade-678139 kubelet[2584]: E0719 05:17:53.590679    2584 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-678139?timeout=10s\": dial tcp 192.168.50.182:8443: connect: connection refused" interval="400ms"
	Jul 19 05:17:53 kubernetes-upgrade-678139 kubelet[2584]: I0719 05:17:53.590863    2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dab2b8a4a9d85132a00d7897f5bbd2be-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-678139\" (UID: \"dab2b8a4a9d85132a00d7897f5bbd2be\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-678139"
	Jul 19 05:17:53 kubernetes-upgrade-678139 kubelet[2584]: I0719 05:17:53.682710    2584 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-678139"
	Jul 19 05:17:53 kubernetes-upgrade-678139 kubelet[2584]: E0719 05:17:53.683871    2584 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.182:8443: connect: connection refused" node="kubernetes-upgrade-678139"
	Jul 19 05:17:53 kubernetes-upgrade-678139 kubelet[2584]: E0719 05:17:53.992820    2584 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-678139?timeout=10s\": dial tcp 192.168.50.182:8443: connect: connection refused" interval="800ms"
	Jul 19 05:17:54 kubernetes-upgrade-678139 kubelet[2584]: I0719 05:17:54.096375    2584 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-678139"
	Jul 19 05:17:54 kubernetes-upgrade-678139 kubelet[2584]: E0719 05:17:54.101810    2584 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.182:8443: connect: connection refused" node="kubernetes-upgrade-678139"
	Jul 19 05:17:54 kubernetes-upgrade-678139 kubelet[2584]: I0719 05:17:54.903923    2584 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-678139"
	Jul 19 05:17:57 kubernetes-upgrade-678139 kubelet[2584]: I0719 05:17:57.516084    2584 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-678139"
	Jul 19 05:17:57 kubernetes-upgrade-678139 kubelet[2584]: I0719 05:17:57.516641    2584 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-678139"
	Jul 19 05:17:57 kubernetes-upgrade-678139 kubelet[2584]: I0719 05:17:57.516763    2584 kuberuntime_manager.go:1524] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 19 05:17:57 kubernetes-upgrade-678139 kubelet[2584]: I0719 05:17:57.517925    2584 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 19 05:17:58 kubernetes-upgrade-678139 kubelet[2584]: I0719 05:17:58.359862    2584 apiserver.go:52] "Watching apiserver"
	Jul 19 05:17:58 kubernetes-upgrade-678139 kubelet[2584]: I0719 05:17:58.382458    2584 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jul 19 05:17:58 kubernetes-upgrade-678139 kubelet[2584]: I0719 05:17:58.435858    2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b1fa432e-ba36-482f-8ba3-645e19a122d7-tmp\") pod \"storage-provisioner\" (UID: \"b1fa432e-ba36-482f-8ba3-645e19a122d7\") " pod="kube-system/storage-provisioner"
	Jul 19 05:17:58 kubernetes-upgrade-678139 kubelet[2584]: I0719 05:17:58.436077    2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46eed40f-6539-4077-ad05-47338886b953-xtables-lock\") pod \"kube-proxy-4tvdc\" (UID: \"46eed40f-6539-4077-ad05-47338886b953\") " pod="kube-system/kube-proxy-4tvdc"
	Jul 19 05:17:58 kubernetes-upgrade-678139 kubelet[2584]: I0719 05:17:58.436115    2584 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46eed40f-6539-4077-ad05-47338886b953-lib-modules\") pod \"kube-proxy-4tvdc\" (UID: \"46eed40f-6539-4077-ad05-47338886b953\") " pod="kube-system/kube-proxy-4tvdc"
	
	
	==> storage-provisioner [33a236b975cf5c0cedfbc41b28b039dca8cfea3651d8dfd4f890b9d8c1530e3f] <==
	I0719 05:17:59.098217       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 05:17:59.126127       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 05:17:59.126208       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [f9a45bb5095f84ae8fd6cd4a4a5a6c47895dfef7a6f6aa0fbf720024cfcdd2fd] <==
	I0719 05:17:14.188239       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 05:17:14.209692       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 05:17:14.209898       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 05:17:14.225654       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 05:17:14.226052       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-678139_c16ee546-13c6-44a1-a205-c2c0ccf6edd4!
	I0719 05:17:14.227880       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"24110f74-b373-4b76-8c53-fdbfc3fffe0f", APIVersion:"v1", ResourceVersion:"359", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-678139_c16ee546-13c6-44a1-a205-c2c0ccf6edd4 became leader
	I0719 05:17:14.327642       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-678139_c16ee546-13c6-44a1-a205-c2c0ccf6edd4!
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0719 05:18:01.009743  178068 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19302-122995/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-678139 -n kubernetes-upgrade-678139
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-678139 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-678139" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-678139
--- FAIL: TestKubernetesUpgrade (414.97s)
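The storage-provisioner log captured above also shows the standard client-go leader-election handshake: the process attempts to acquire the kube-system/k8s.io-minikube-hostpath lock, becomes leader, and only then starts its provisioner controller. As a rough, hedged illustration of that pattern only (not the provisioner's actual code: it holds an Endpoints-based lock, while this sketch assumes a Lease lock, in-cluster config, and arbitrary durations):

	// Hedged sketch of the leader-election pattern seen in the storage-provisioner
	// log above. Assumptions: in-cluster config, a Lease lock (the provisioner in
	// this report uses an Endpoints lock), and illustrative timing values.
	package main

	import (
		"context"
		"log"
		"os"
		"time"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		id, _ := os.Hostname()
		lock, err := resourcelock.New(
			resourcelock.LeasesResourceLock,
			"kube-system", "k8s.io-minikube-hostpath",
			client.CoreV1(), client.CoordinationV1(),
			resourcelock.ResourceLockConfig{Identity: id},
		)
		if err != nil {
			log.Fatal(err)
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("became leader, starting provisioner controller")
					<-ctx.Done()
				},
				OnStoppedLeading: func() {
					log.Println("lost the lease, stopping")
				},
			},
		})
	}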

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (7200.056s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.237:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.237:8443: connect: connection refused
[the identical WARNING above was emitted 65 times in total before the error below]
E0719 05:36:36.834901  130170 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/functional-554179/client.crt: no such file or directory
[the identical WARNING was emitted a further 38 times after the error above]
panic: test timed out after 2h0m0s
running tests:
	TestNetworkPlugins (22m22s)
	TestStartStop (25m3s)
	TestStartStop/group/default-k8s-diff-port (18m6s)
	TestStartStop/group/default-k8s-diff-port/serial (18m6s)
	TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (4m56s)
	TestStartStop/group/embed-certs (19m11s)
	TestStartStop/group/embed-certs/serial (19m11s)
	TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5m13s)
	TestStartStop/group/no-preload (19m38s)
	TestStartStop/group/no-preload/serial (19m38s)
	TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (4m8s)
	TestStartStop/group/old-k8s-version (20m24s)
	TestStartStop/group/old-k8s-version/serial (20m24s)
	TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (1m42s)

                                                
                                                
goroutine 3747 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d
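Goroutine 3747 is the Go testing package's timeout alarm firing. Purely as a hedged illustration of the mechanism (a mirror of the idea, not the stdlib's actual implementation), the sketch below arms a timer for the -timeout budget (2h0m0s in this run) and panics when it expires, which is what produced the panic and goroutine dump here:

	// Hedged illustration of the -timeout alarm behind
	// "panic: test timed out after 2h0m0s".
	package main

	import (
		"fmt"
		"time"
	)

	// startAlarm arms a timer that panics once the test budget is exhausted.
	func startAlarm(timeout time.Duration) *time.Timer {
		return time.AfterFunc(timeout, func() {
			panic(fmt.Sprintf("test timed out after %v", timeout))
		})
	}

	func main() {
		alarm := startAlarm(2 * time.Hour)
		defer alarm.Stop() // disarmed only if the suite finishes in time
		// ... run the test suite ...
	}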

                                                
                                                
goroutine 1 [chan receive, 19 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00086cb60, 0xc000b1bbb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0006a2420, {0x49ce120, 0x2b, 0x2b}, {0x26b4524?, 0xc000759b00?, 0x4a8aa60?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc00071a000)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc00071a000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:133 +0x195

                                                
                                                
goroutine 10 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000415d80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 1775 [chan receive, 26 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc001a3cb60, 0x3138860)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1760
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 39 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 38
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x171

                                                
                                                
goroutine 1708 [chan receive, 22 minutes]:
testing.(*T).Run(0xc001a3c000, {0x2659ba9?, 0x55127c?}, 0xc0022085a0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc001a3c000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc001a3c000, 0x3138640)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2398 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2397
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2789 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36b7be0, 0xc0002b9180}, {0x36ab2c0, 0xc00164a040}, 0x1, 0x0, 0xc000b1bc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36b7be0?, 0xc0007da1c0?}, 0x3b9aca00, 0xc000b17e10?, 0x1, 0xc000b17c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x36b7be0, 0xc0007da1c0}, 0xc000471040, {0xc0022e2918, 0x11}, {0x267fbdf, 0x14}, {0x269779f, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x36b7be0, 0xc0007da1c0}, 0xc000471040, {0xc0022e2918, 0x11}, {0x2664d73?, 0xc001931760?}, {0x551133?, 0x4a170f?}, {0xc000868000, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x145
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc000471040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc000471040, 0xc0014c6a00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2404
	/usr/local/go/src/testing/testing.go:1742 +0x390
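This stack shows the shape of the hung wait: validateAppExistsAfterStop calls PodWait, which polls through wait.PollUntilContextTimeout until the dashboard pods appear or the budget runs out, and every poll in this run failed with the connection-refused errors shown earlier. A hedged sketch of that polling pattern follows; the namespace, label selector and 9-minute budget mirror the log, while the kubeconfig handling and readiness check are illustrative, not the helper's actual code:

	// Hedged sketch of the PodWait-style polling loop in the stack above:
	// list pods matching a label selector once a second until one is Running
	// or the timeout expires.
	package main

	import (
		"context"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		err = wait.PollUntilContextTimeout(context.Background(), time.Second, 9*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
					LabelSelector: "k8s-app=kubernetes-dashboard",
				})
				if err != nil {
					// Matches the WARNING lines above: log and keep polling.
					log.Printf("pod list failed: %v", err)
					return false, nil
				}
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return true, nil
					}
				}
				return false, nil
			})
		if err != nil {
			log.Fatalf("pods never became ready: %v", err)
		}
	}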

                                                
                                                
goroutine 2501 [chan receive, 17 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000826b80, 0xc0009803c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2523
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2679 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36b7be0, 0xc00046f9d0}, {0x36ab2c0, 0xc0018b0f20}, 0x1, 0x0, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36b7be0?, 0xc000190380?}, 0x3b9aca00, 0xc000b1be10?, 0x1, 0xc000b1bc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x36b7be0, 0xc000190380}, 0xc0018fc680, {0xc00217b460, 0x1c}, {0x267fbdf, 0x14}, {0x269779f, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x36b7be0, 0xc000190380}, 0xc0018fc680, {0xc00217b460, 0x1c}, {0x2682ad9?, 0xc0012fd760?}, {0x551133?, 0x4a170f?}, {0xc0012f6900, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x145
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0018fc680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0018fc680, 0xc0007d8380)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2470
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1977 [chan receive, 22 minutes]:
testing.(*testContext).waitParallel(0xc0006845f0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013da340)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013da340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013da340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0013da340, 0xc0001c5b00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1955
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2470 [chan receive, 5 minutes]:
testing.(*T).Run(0xc001a3d860, {0x268595b?, 0x60400000004?}, 0xc0007d8380)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001a3d860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001a3d860, 0xc0007d9500)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1858
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3225 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001be7140)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3224
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2500 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001966ae0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2523
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2404 [chan receive, 5 minutes]:
testing.(*T).Run(0xc0013db1e0, {0x268595b?, 0x60400000004?}, 0xc0014c6a00)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0013db1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0013db1e0, 0xc001996380)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1859
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2361 [chan receive]:
testing.(*T).Run(0xc0013da000, {0x268595b?, 0x60400000004?}, 0xc0001c5a00)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0013da000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0013da000, 0xc001996080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1776
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1980 [chan receive, 22 minutes]:
testing.(*testContext).waitParallel(0xc0006845f0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013daea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013daea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013daea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0013daea0, 0xc0001c5c80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1955
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 437 [chan receive, 76 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000826b40, 0xc0009803c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 402
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2463 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2462
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 262 [IO wait, 78 minutes]:
internal/poll.runtime_pollWait(0x7fb8c4435730, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xf?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000990000)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc000990000)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc000b8a340)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc000b8a340)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0004fc0f0, {0x36aac00, 0xc000b8a340})
	/usr/local/go/src/net/http/server.go:3260 +0x33e
net/http.(*Server).ListenAndServe(0xc0004fc0f0)
	/usr/local/go/src/net/http/server.go:3189 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0x592e44?, 0xc0013d8b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 259
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129
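Goroutine 262 has been parked in Accept for 78 minutes: it is the helper HTTP proxy that functional_test.go starts in the background and leaves listening for the rest of the run, so its presence in the dump is expected rather than a leak. A hedged sketch of that background-listener pattern (the handler and address are placeholders, not the test's actual proxy):

	// Hedged sketch of the startHTTPProxy pattern in the stack above: an
	// http.Server started on a background goroutine that blocks in Accept
	// until the process exits.
	package main

	import (
		"log"
		"net/http"
	)

	func startHTTPProxy(addr string) *http.Server {
		srv := &http.Server{
			Addr: addr,
			Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
				// A real proxy would forward the request; this placeholder just answers.
				w.WriteHeader(http.StatusOK)
			}),
		}
		go func() {
			// Blocks in Accept, exactly like goroutine 262 above.
			if err := srv.ListenAndServe(); err != http.ErrServerClosed {
				log.Printf("proxy exited: %v", err)
			}
		}()
		return srv
	}

	func main() {
		_ = startHTTPProxy("127.0.0.1:0")
		select {} // keep the process alive so the listener stays up
	}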

                                                
                                                
goroutine 873 [select, 75 minutes]:
net/http.(*persistConn).writeLoop(0xc001976900)
	/usr/local/go/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 870
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

                                                
                                                
goroutine 3235 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0008270d0, 0x0)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x21477c0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001be6f00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000827100)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0021685c0, {0x3693d20, 0xc0016f2810}, 0x1, 0xc0009803c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0021685c0, 0x3b9aca00, 0x0, 0x1, 0xc0009803c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3226
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 417 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc00127f200, 0xc00186a1e0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 375
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 424 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000826ad0, 0x23)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x21477c0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000b9b560)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000826b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00084dbc0, {0x3693d20, 0xc000bb1170}, 0x1, 0xc0009803c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00084dbc0, 0x3b9aca00, 0x0, 0x1, 0xc0009803c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 437
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef
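The two goroutines above (and their siblings later in the dump) are client-go's certificate-rotation workers: each sits in workqueue.(*Type).Get waiting for work, driven by wait.BackoffUntil. As a hedged illustration of that standard worker loop only (the queue contents and processing step are placeholders, not client-go's cert_rotation code):

	// Hedged sketch of the workqueue worker loop visible in the stacks above:
	// a worker repeatedly takes items from a queue and processes them until
	// the queue shuts down and the stop channel closes.
	package main

	import (
		"log"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/util/workqueue"
	)

	func main() {
		queue := workqueue.New()
		stopCh := make(chan struct{})

		// Worker: blocks in queue.Get (the sync.Cond.Wait seen in the dump)
		// until an item arrives or the queue shuts down.
		go wait.Until(func() {
			for {
				item, shutdown := queue.Get()
				if shutdown {
					return
				}
				log.Printf("processing %v", item)
				queue.Done(item)
			}
		}, time.Second, stopCh)

		queue.Add("rotate-client-cert") // placeholder work item
		time.Sleep(100 * time.Millisecond)
		queue.ShutDown()
		close(stopCh)
	}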

                                                
                                                
goroutine 3237 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3236
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2640 [IO wait]:
internal/poll.runtime_pollWait(0x7fb8c4435350, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0007d9c00?, 0xc0017e7800?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0007d9c00, {0xc0017e7800, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc0007d9c00, {0xc0017e7800?, 0xc000466b40?, 0x2?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc00220e160, {0xc0017e7800?, 0xc0017e785f?, 0x6f?})
	/usr/local/go/src/net/net.go:185 +0x45
crypto/tls.(*atLeastReader).Read(0xc00237a510, {0xc0017e7800?, 0x0?, 0xc00237a510?})
	/usr/local/go/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc00188b7b0, {0x36944c0, 0xc00237a510})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc00188b508, {0x7fb8bc75d1e8, 0xc001c46cf0}, 0xc00128e980?)
	/usr/local/go/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc00188b508, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc00188b508, {0xc001359000, 0x1000, 0xc001784e00?})
	/usr/local/go/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc00253ce40, {0xc0007730e0, 0x9, 0x4989c20?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x36929a0, 0xc00253ce40}, {0xc0007730e0, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0007730e0, 0x9, 0x128edc0?}, {0x36929a0?, 0xc00253ce40?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0007730a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc00128efa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc000854600)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2250 +0x8b
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 2639
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 436 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000b9b680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 402
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 872 [select, 75 minutes]:
net/http.(*persistConn).readLoop(0xc001976900)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 870
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

                                                
                                                
goroutine 1777 [chan receive, 26 minutes]:
testing.(*testContext).waitParallel(0xc0006845f0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001a3d040)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001a3d040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001a3d040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc001a3d040, 0xc000826e40)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1775
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2652 [IO wait]:
internal/poll.runtime_pollWait(0x7fb8c4435160, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc000415c80?, 0xc0013bc000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000415c80, {0xc0013bc000, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc000415c80, {0xc0013bc000?, 0x7fb8bc756f90?, 0xc00237a4b0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc0018d60a0, {0xc0013bc000?, 0xc0013ca938?, 0x41469b?})
	/usr/local/go/src/net/net.go:185 +0x45
crypto/tls.(*atLeastReader).Read(0xc00237a4b0, {0xc0013bc000?, 0x0?, 0xc00237a4b0?})
	/usr/local/go/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc0015189b0, {0x36944c0, 0xc00237a4b0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc001518708, {0x36938a0, 0xc0018d60a0}, 0xc0013ca980?)
	/usr/local/go/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc001518708, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc001518708, {0xc000705000, 0x1000, 0xc001784e00?})
	/usr/local/go/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc00192d020, {0xc00215e4a0, 0x9, 0x4989c20?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x36929a0, 0xc00192d020}, {0xc00215e4a0, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc00215e4a0, 0x9, 0x13cadc0?}, {0x36929a0?, 0xc00192d020?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc00215e460)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc0013cafa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc000208d80)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2250 +0x8b
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 2651
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 1979 [chan receive, 22 minutes]:
testing.(*testContext).waitParallel(0xc0006845f0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013dad00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013dad00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013dad00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0013dad00, 0xc0001c5c00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1955
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2396 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc00194a490, 0x3)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x21477c0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00192d1a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00194a4c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002528280, {0x3693d20, 0xc000bd6240}, 0x1, 0xc0009803c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc002528280, 0x3b9aca00, 0x0, 0x1, 0xc0009803c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2436
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2436 [chan receive, 19 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00194a4c0, 0xc0009803c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2392
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2397 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36b7da0, 0xc0009803c0}, 0xc000113750, 0xc0013c8f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36b7da0, 0xc0009803c0}, 0xa0?, 0xc000113750, 0xc000113798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36b7da0?, 0xc0009803c0?}, 0xc00086c9c0?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0001137d0?, 0x592e44?, 0xc0009814a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2436
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2318 [chan receive, 5 minutes]:
testing.(*T).Run(0xc0013db520, {0x268595b?, 0x60400000004?}, 0xc000414c80)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0013db520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0013db520, 0xc001996280)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1861
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1861 [chan receive, 19 minutes]:
testing.(*T).Run(0xc001a3d6c0, {0x265b154?, 0x0?}, 0xc001996280)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001a3d6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001a3d6c0, 0xc000827080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1775
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2628 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36b7be0, 0xc0004eb730}, {0x36ab2c0, 0xc00196edc0}, 0x1, 0x0, 0xc001353c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36b7be0?, 0xc0007da000?}, 0x3b9aca00, 0xc00006fe10?, 0x1, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x36b7be0, 0xc0007da000}, 0xc000470d00, {0xc0022e2108, 0x12}, {0x267fbdf, 0x14}, {0x269779f, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x36b7be0, 0xc0007da000}, 0xc000470d00, {0xc0022e2108, 0x12}, {0x2666f81?, 0xc0012ff760?}, {0x551133?, 0x4a170f?}, {0xc000868700, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x145
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc000470d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc000470d00, 0xc000414c80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2318
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 474 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc001353c80, 0xc00197e000)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 473
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 1928 [chan receive, 22 minutes]:
testing.(*testContext).waitParallel(0xc0006845f0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013d81a0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013d81a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013d81a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0013d81a0, 0xc000502000)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1955
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1858 [chan receive, 19 minutes]:
testing.(*T).Run(0xc001a3d1e0, {0x265b154?, 0x0?}, 0xc0007d9500)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001a3d1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001a3d1e0, 0xc000826f00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1775
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 425 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36b7da0, 0xc0009803c0}, 0xc000095750, 0xc0000a6f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36b7da0, 0xc0009803c0}, 0x0?, 0xc000095750, 0xc000095798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36b7da0?, 0xc0009803c0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 437
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 426 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 425
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3224 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36b7be0, 0xc00064b570}, {0x36ab2c0, 0xc00178afe0}, 0x1, 0x0, 0xc000b17c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36b7be0?, 0xc0007da0e0?}, 0x3b9aca00, 0xc000b17e10?, 0x1, 0xc000b17c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x36b7be0, 0xc0007da0e0}, 0xc0018fc340, {0xc0022e2720, 0x16}, {0x267fbdf, 0x14}, {0x269779f, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x36b7be0, 0xc0007da0e0}, 0xc0018fc340, {0xc0022e2720, 0x16}, {0x2670f1d?, 0xc001934760?}, {0x551133?, 0x4a170f?}, {0xc000be4300, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x145
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0018fc340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0018fc340, 0xc0001c5a00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2361
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1929 [chan receive, 22 minutes]:
testing.(*testContext).waitParallel(0xc0006845f0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013d8340)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013d8340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013d8340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0013d8340, 0xc000502100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1955
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1776 [chan receive, 21 minutes]:
testing.(*T).Run(0xc001a3cea0, {0x265b154?, 0x0?}, 0xc001996080)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001a3cea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001a3cea0, 0xc000826e00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1775
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2462 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36b7da0, 0xc0009803c0}, 0xc001936f50, 0xc0013cbf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36b7da0, 0xc0009803c0}, 0xd0?, 0xc001936f50, 0xc001936f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36b7da0?, 0xc0009803c0?}, 0x99b656?, 0xc001365080?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000be1750?, 0xc0018b96d8?, 0xc001936fa8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2501
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 694 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc0001fc600, 0xc000060f60)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 693
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 1956 [chan receive, 22 minutes]:
testing.(*testContext).waitParallel(0xc0006845f0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00086d040)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00086d040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00086d040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc00086d040, 0xc001996000)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1955
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3226 [chan receive]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000827100, 0xc0009803c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3224
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2774 [IO wait]:
internal/poll.runtime_pollWait(0x7fb8c44349a0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0014c7700?, 0xc000b97800?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0014c7700, {0xc000b97800, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc0014c7700, {0xc000b97800?, 0xc001471b80?, 0x2?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc0018d6b78, {0xc000b97800?, 0xc000b9785f?, 0x6f?})
	/usr/local/go/src/net/net.go:185 +0x45
crypto/tls.(*atLeastReader).Read(0xc001bd1bd8, {0xc000b97800?, 0x0?, 0xc001bd1bd8?})
	/usr/local/go/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc001518d30, {0x36944c0, 0xc001bd1bd8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc001518a88, {0x7fb8bc75d1e8, 0xc0006a2108}, 0xc001450980?)
	/usr/local/go/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc001518a88, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc001518a88, {0xc0015fe000, 0x1000, 0xc001785880?})
	/usr/local/go/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc001467da0, {0xc00215f700, 0x9, 0x4989c20?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x36929a0, 0xc001467da0}, {0xc00215f700, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc00215f700, 0x9, 0x1450dc0?}, {0x36929a0?, 0xc001467da0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc00215f6c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc001450fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc001479b00)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2250 +0x8b
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 2773
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 2461 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc000826a50, 0x3)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x21477c0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0019669c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000826b80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0002183a0, {0x3693d20, 0xc002380180}, 0x1, 0xc0009803c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0002183a0, 0x3b9aca00, 0x0, 0x1, 0xc0009803c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2501
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 1978 [chan receive, 22 minutes]:
testing.(*testContext).waitParallel(0xc0006845f0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013da9c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013da9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013da9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0013da9c0, 0xc0001c5b80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1955
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1955 [chan receive, 22 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00086cea0, 0xc0022085a0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1708
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1760 [chan receive, 26 minutes]:
testing.(*T).Run(0xc0013da1a0, {0x2659ba9?, 0x551133?}, 0x3138860)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc0013da1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0013da1a0, 0x3138688)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1859 [chan receive, 21 minutes]:
testing.(*T).Run(0xc001a3d380, {0x265b154?, 0x0?}, 0xc001996380)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001a3d380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001a3d380, 0xc000826f40)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1775
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3236 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36b7da0, 0xc0009803c0}, 0xc000096750, 0xc000096798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36b7da0, 0xc0009803c0}, 0xd0?, 0xc000096750, 0xc000096798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36b7da0?, 0xc0009803c0?}, 0x99b656?, 0xc0014d5680?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0000967d0?, 0x592e44?, 0xc00191a160?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3226
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2435 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00192d380)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2392
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                    

Test pass (176/221)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 28.92
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.3/json-events 16.57
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.06
18 TestDownloadOnly/v1.30.3/DeleteAll 0.13
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.31.0-beta.0/json-events 14.38
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.13
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.12
30 TestBinaryMirror 0.57
31 TestOffline 62.62
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
37 TestCertOptions 45.52
38 TestCertExpiration 255.03
40 TestForceSystemdFlag 55.39
41 TestForceSystemdEnv 48.35
43 TestKVMDriverInstallOrUpdate 4.2
47 TestErrorSpam/setup 38.04
48 TestErrorSpam/start 0.35
49 TestErrorSpam/status 0.71
50 TestErrorSpam/pause 1.47
51 TestErrorSpam/unpause 1.49
52 TestErrorSpam/stop 5.05
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 65.02
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 31.15
59 TestFunctional/serial/KubeContext 0.05
60 TestFunctional/serial/KubectlGetPods 0.08
63 TestFunctional/serial/CacheCmd/cache/add_remote 4.63
64 TestFunctional/serial/CacheCmd/cache/add_local 2.36
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
66 TestFunctional/serial/CacheCmd/cache/list 0.04
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
68 TestFunctional/serial/CacheCmd/cache/cache_reload 1.97
69 TestFunctional/serial/CacheCmd/cache/delete 0.09
70 TestFunctional/serial/MinikubeKubectlCmd 0.1
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
72 TestFunctional/serial/ExtraConfig 37.5
73 TestFunctional/serial/ComponentHealth 0.07
74 TestFunctional/serial/LogsCmd 1.31
75 TestFunctional/serial/LogsFileCmd 1.31
76 TestFunctional/serial/InvalidService 5.03
78 TestFunctional/parallel/ConfigCmd 0.35
79 TestFunctional/parallel/DashboardCmd 42.6
80 TestFunctional/parallel/DryRun 0.3
81 TestFunctional/parallel/InternationalLanguage 0.15
82 TestFunctional/parallel/StatusCmd 1.23
86 TestFunctional/parallel/ServiceCmdConnect 7.75
87 TestFunctional/parallel/AddonsCmd 0.14
88 TestFunctional/parallel/PersistentVolumeClaim 26.48
90 TestFunctional/parallel/SSHCmd 0.45
91 TestFunctional/parallel/CpCmd 1.32
92 TestFunctional/parallel/MySQL 24.04
93 TestFunctional/parallel/FileSync 0.22
94 TestFunctional/parallel/CertSync 1.41
98 TestFunctional/parallel/NodeLabels 0.1
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.51
102 TestFunctional/parallel/License 1.04
103 TestFunctional/parallel/ServiceCmd/DeployApp 12.2
104 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
105 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
106 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
107 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
108 TestFunctional/parallel/ProfileCmd/profile_list 0.43
109 TestFunctional/parallel/MountCmd/any-port 20.59
110 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
111 TestFunctional/parallel/ServiceCmd/List 1.07
112 TestFunctional/parallel/ServiceCmd/JSONOutput 0.89
113 TestFunctional/parallel/ServiceCmd/HTTPS 0.3
114 TestFunctional/parallel/ServiceCmd/Format 0.48
115 TestFunctional/parallel/ServiceCmd/URL 0.31
116 TestFunctional/parallel/Version/short 0.06
117 TestFunctional/parallel/Version/components 0.74
118 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.19
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.2
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
122 TestFunctional/parallel/ImageCommands/ImageBuild 4.71
123 TestFunctional/parallel/ImageCommands/Setup 1.77
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.57
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.93
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.8
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.13
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
129 TestFunctional/parallel/MountCmd/specific-port 1.75
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.79
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.62
132 TestFunctional/parallel/MountCmd/VerifyCleanup 1.44
142 TestFunctional/delete_echo-server_images 0.03
143 TestFunctional/delete_my-image_image 0.01
144 TestFunctional/delete_minikube_cached_images 0.01
148 TestMultiControlPlane/serial/StartCluster 213.06
149 TestMultiControlPlane/serial/DeployApp 57.91
150 TestMultiControlPlane/serial/PingHostFromPods 1.2
151 TestMultiControlPlane/serial/AddWorkerNode 54.87
152 TestMultiControlPlane/serial/NodeLabels 0.07
153 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.56
154 TestMultiControlPlane/serial/CopyFile 12.76
156 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.47
158 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.4
160 TestMultiControlPlane/serial/DeleteSecondaryNode 17.19
161 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.37
163 TestMultiControlPlane/serial/RestartCluster 324.64
164 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.37
165 TestMultiControlPlane/serial/AddSecondaryNode 78.16
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.55
170 TestJSONOutput/start/Command 58.7
171 TestJSONOutput/start/Audit 0
173 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/pause/Command 0.69
177 TestJSONOutput/pause/Audit 0
179 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/unpause/Command 0.59
183 TestJSONOutput/unpause/Audit 0
185 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/stop/Command 6.64
189 TestJSONOutput/stop/Audit 0
191 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
193 TestErrorJSONOutput 0.19
198 TestMainNoArgs 0.04
199 TestMinikubeProfile 81.7
202 TestMountStart/serial/StartWithMountFirst 27.6
203 TestMountStart/serial/VerifyMountFirst 0.38
204 TestMountStart/serial/StartWithMountSecond 26.27
205 TestMountStart/serial/VerifyMountSecond 0.36
206 TestMountStart/serial/DeleteFirst 0.89
207 TestMountStart/serial/VerifyMountPostDelete 0.37
208 TestMountStart/serial/Stop 1.27
209 TestMountStart/serial/RestartStopped 19.9
210 TestMountStart/serial/VerifyMountPostStop 0.37
213 TestMultiNode/serial/FreshStart2Nodes 114.5
214 TestMultiNode/serial/DeployApp2Nodes 5.27
215 TestMultiNode/serial/PingHostFrom2Pods 0.79
216 TestMultiNode/serial/AddNode 46.29
217 TestMultiNode/serial/MultiNodeLabels 0.06
218 TestMultiNode/serial/ProfileList 0.22
219 TestMultiNode/serial/CopyFile 7.03
220 TestMultiNode/serial/StopNode 2.25
221 TestMultiNode/serial/StartAfterStop 39.15
223 TestMultiNode/serial/DeleteNode 2.11
225 TestMultiNode/serial/RestartMultiNode 187.15
226 TestMultiNode/serial/ValidateNameConflict 40.87
233 TestScheduledStopUnix 111.35
237 TestRunningBinaryUpgrade 223.93
242 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
243 TestNoKubernetes/serial/StartWithK8s 94.43
252 TestPause/serial/Start 95.91
253 TestNoKubernetes/serial/StartWithStopK8s 43.02
254 TestNoKubernetes/serial/Start 26.55
255 TestPause/serial/SecondStartNoReconfiguration 61.8
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
257 TestNoKubernetes/serial/ProfileList 1.29
258 TestNoKubernetes/serial/Stop 1.27
259 TestNoKubernetes/serial/StartNoArgs 24.96
260 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
261 TestPause/serial/Pause 0.78
262 TestPause/serial/VerifyStatus 0.27
263 TestPause/serial/Unpause 0.78
264 TestPause/serial/PauseAgain 0.97
265 TestPause/serial/DeletePaused 1.63
269 TestPause/serial/VerifyDeletedResources 0.41
278 TestStoppedBinaryUpgrade/Setup 2.3
279 TestStoppedBinaryUpgrade/Upgrade 142.43
282 TestStoppedBinaryUpgrade/MinikubeLogs 0.89
TestDownloadOnly/v1.20.0/json-events (28.92s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-637301 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-637301 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (28.916322117s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (28.92s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-637301
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-637301: exit status 85 (59.179065ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-637301 | jenkins | v1.33.1 | 19 Jul 24 03:37 UTC |          |
	|         | -p download-only-637301        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 03:37:14
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 03:37:14.922110  130182 out.go:291] Setting OutFile to fd 1 ...
	I0719 03:37:14.922385  130182 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:37:14.922395  130182 out.go:304] Setting ErrFile to fd 2...
	I0719 03:37:14.922401  130182 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:37:14.922578  130182 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-122995/.minikube/bin
	W0719 03:37:14.922721  130182 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19302-122995/.minikube/config/config.json: open /home/jenkins/minikube-integration/19302-122995/.minikube/config/config.json: no such file or directory
	I0719 03:37:14.923329  130182 out.go:298] Setting JSON to true
	I0719 03:37:14.924217  130182 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4778,"bootTime":1721355457,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 03:37:14.924313  130182 start.go:139] virtualization: kvm guest
	I0719 03:37:14.926671  130182 out.go:97] [download-only-637301] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0719 03:37:14.926793  130182 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19302-122995/.minikube/cache/preloaded-tarball: no such file or directory
	I0719 03:37:14.926832  130182 notify.go:220] Checking for updates...
	I0719 03:37:14.928034  130182 out.go:169] MINIKUBE_LOCATION=19302
	I0719 03:37:14.929362  130182 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 03:37:14.930607  130182 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19302-122995/kubeconfig
	I0719 03:37:14.931801  130182 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-122995/.minikube
	I0719 03:37:14.932976  130182 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0719 03:37:14.935310  130182 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 03:37:14.935558  130182 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 03:37:15.035850  130182 out.go:97] Using the kvm2 driver based on user configuration
	I0719 03:37:15.035908  130182 start.go:297] selected driver: kvm2
	I0719 03:37:15.035927  130182 start.go:901] validating driver "kvm2" against <nil>
	I0719 03:37:15.036354  130182 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 03:37:15.036507  130182 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-122995/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 03:37:15.052367  130182 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 03:37:15.052453  130182 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 03:37:15.052908  130182 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0719 03:37:15.053121  130182 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 03:37:15.053170  130182 cni.go:84] Creating CNI manager for ""
	I0719 03:37:15.053183  130182 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 03:37:15.053192  130182 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 03:37:15.053264  130182 start.go:340] cluster config:
	{Name:download-only-637301 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-637301 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 03:37:15.053489  130182 iso.go:125] acquiring lock: {Name:mk610026cb7ac7ecfa6440021a031d3b49160f81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 03:37:15.055344  130182 out.go:97] Downloading VM boot image ...
	I0719 03:37:15.055399  130182 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19302-122995/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 03:37:25.397172  130182 out.go:97] Starting "download-only-637301" primary control-plane node in "download-only-637301" cluster
	I0719 03:37:25.397208  130182 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 03:37:25.498491  130182 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0719 03:37:25.498535  130182 cache.go:56] Caching tarball of preloaded images
	I0719 03:37:25.498689  130182 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0719 03:37:25.500381  130182 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0719 03:37:25.500401  130182 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0719 03:37:25.601688  130182 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19302-122995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-637301 host does not exist
	  To start a cluster, run: "minikube start -p download-only-637301"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-637301
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.30.3/json-events (16.57s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-136081 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-136081 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (16.572194316s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (16.57s)

                                                
                                    
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-136081
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-136081: exit status 85 (62.364606ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-637301 | jenkins | v1.33.1 | 19 Jul 24 03:37 UTC |                     |
	|         | -p download-only-637301        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 19 Jul 24 03:37 UTC | 19 Jul 24 03:37 UTC |
	| delete  | -p download-only-637301        | download-only-637301 | jenkins | v1.33.1 | 19 Jul 24 03:37 UTC | 19 Jul 24 03:37 UTC |
	| start   | -o=json --download-only        | download-only-136081 | jenkins | v1.33.1 | 19 Jul 24 03:37 UTC |                     |
	|         | -p download-only-136081        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 03:37:44
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 03:37:44.153113  130454 out.go:291] Setting OutFile to fd 1 ...
	I0719 03:37:44.153323  130454 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:37:44.153337  130454 out.go:304] Setting ErrFile to fd 2...
	I0719 03:37:44.153345  130454 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:37:44.153740  130454 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-122995/.minikube/bin
	I0719 03:37:44.154348  130454 out.go:298] Setting JSON to true
	I0719 03:37:44.155169  130454 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4807,"bootTime":1721355457,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 03:37:44.155223  130454 start.go:139] virtualization: kvm guest
	I0719 03:37:44.157538  130454 out.go:97] [download-only-136081] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 03:37:44.157672  130454 notify.go:220] Checking for updates...
	I0719 03:37:44.158960  130454 out.go:169] MINIKUBE_LOCATION=19302
	I0719 03:37:44.160141  130454 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 03:37:44.161344  130454 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19302-122995/kubeconfig
	I0719 03:37:44.162589  130454 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-122995/.minikube
	I0719 03:37:44.163749  130454 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0719 03:37:44.165785  130454 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 03:37:44.166010  130454 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 03:37:44.197607  130454 out.go:97] Using the kvm2 driver based on user configuration
	I0719 03:37:44.197633  130454 start.go:297] selected driver: kvm2
	I0719 03:37:44.197640  130454 start.go:901] validating driver "kvm2" against <nil>
	I0719 03:37:44.197946  130454 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 03:37:44.198072  130454 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-122995/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 03:37:44.214035  130454 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 03:37:44.214082  130454 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 03:37:44.214584  130454 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0719 03:37:44.214737  130454 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 03:37:44.214794  130454 cni.go:84] Creating CNI manager for ""
	I0719 03:37:44.214806  130454 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 03:37:44.214813  130454 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 03:37:44.214877  130454 start.go:340] cluster config:
	{Name:download-only-136081 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-136081 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 03:37:44.214982  130454 iso.go:125] acquiring lock: {Name:mk610026cb7ac7ecfa6440021a031d3b49160f81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 03:37:44.216557  130454 out.go:97] Starting "download-only-136081" primary control-plane node in "download-only-136081" cluster
	I0719 03:37:44.216580  130454 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 03:37:44.627326  130454 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 03:37:44.627373  130454 cache.go:56] Caching tarball of preloaded images
	I0719 03:37:44.627540  130454 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0719 03:37:44.629442  130454 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0719 03:37:44.629457  130454 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0719 03:37:44.740883  130454 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:15191286f02471d9b3ea0b587fcafc39 -> /home/jenkins/minikube-integration/19302-122995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0719 03:37:59.086239  130454 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0719 03:37:59.086338  130454 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19302-122995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-136081 host does not exist
	  To start a cluster, run: "minikube start -p download-only-136081"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-136081
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/json-events (14.38s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-577536 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-577536 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (14.380558564s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (14.38s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-577536
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-577536: exit status 85 (57.024145ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-637301 | jenkins | v1.33.1 | 19 Jul 24 03:37 UTC |                     |
	|         | -p download-only-637301             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 19 Jul 24 03:37 UTC | 19 Jul 24 03:37 UTC |
	| delete  | -p download-only-637301             | download-only-637301 | jenkins | v1.33.1 | 19 Jul 24 03:37 UTC | 19 Jul 24 03:37 UTC |
	| start   | -o=json --download-only             | download-only-136081 | jenkins | v1.33.1 | 19 Jul 24 03:37 UTC |                     |
	|         | -p download-only-136081             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 19 Jul 24 03:38 UTC | 19 Jul 24 03:38 UTC |
	| delete  | -p download-only-136081             | download-only-136081 | jenkins | v1.33.1 | 19 Jul 24 03:38 UTC | 19 Jul 24 03:38 UTC |
	| start   | -o=json --download-only             | download-only-577536 | jenkins | v1.33.1 | 19 Jul 24 03:38 UTC |                     |
	|         | -p download-only-577536             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 03:38:01
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 03:38:01.050748  130675 out.go:291] Setting OutFile to fd 1 ...
	I0719 03:38:01.051003  130675 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:38:01.051013  130675 out.go:304] Setting ErrFile to fd 2...
	I0719 03:38:01.051019  130675 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:38:01.051202  130675 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-122995/.minikube/bin
	I0719 03:38:01.051776  130675 out.go:298] Setting JSON to true
	I0719 03:38:01.052682  130675 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4824,"bootTime":1721355457,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 03:38:01.052736  130675 start.go:139] virtualization: kvm guest
	I0719 03:38:01.054739  130675 out.go:97] [download-only-577536] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 03:38:01.054891  130675 notify.go:220] Checking for updates...
	I0719 03:38:01.056127  130675 out.go:169] MINIKUBE_LOCATION=19302
	I0719 03:38:01.057336  130675 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 03:38:01.058554  130675 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19302-122995/kubeconfig
	I0719 03:38:01.059632  130675 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-122995/.minikube
	I0719 03:38:01.060814  130675 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0719 03:38:01.062880  130675 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 03:38:01.063157  130675 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 03:38:01.094089  130675 out.go:97] Using the kvm2 driver based on user configuration
	I0719 03:38:01.094114  130675 start.go:297] selected driver: kvm2
	I0719 03:38:01.094126  130675 start.go:901] validating driver "kvm2" against <nil>
	I0719 03:38:01.094450  130675 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 03:38:01.094524  130675 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19302-122995/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0719 03:38:01.109753  130675 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0719 03:38:01.109796  130675 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 03:38:01.110284  130675 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0719 03:38:01.110434  130675 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 03:38:01.110495  130675 cni.go:84] Creating CNI manager for ""
	I0719 03:38:01.110507  130675 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0719 03:38:01.110517  130675 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 03:38:01.110574  130675 start.go:340] cluster config:
	{Name:download-only-577536 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-577536 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 03:38:01.110662  130675 iso.go:125] acquiring lock: {Name:mk610026cb7ac7ecfa6440021a031d3b49160f81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 03:38:01.112683  130675 out.go:97] Starting "download-only-577536" primary control-plane node in "download-only-577536" cluster
	I0719 03:38:01.112727  130675 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0719 03:38:01.986845  130675 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0719 03:38:01.986894  130675 cache.go:56] Caching tarball of preloaded images
	I0719 03:38:01.987055  130675 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0719 03:38:01.989019  130675 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0719 03:38:01.989039  130675 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0719 03:38:02.095985  130675 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:3743f5ddb63994a661f14e5a8d3af98c -> /home/jenkins/minikube-integration/19302-122995/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-577536 host does not exist
	  To start a cluster, run: "minikube start -p download-only-577536"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-577536
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.57s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-466954 --alsologtostderr --binary-mirror http://127.0.0.1:35903 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-466954" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-466954
--- PASS: TestBinaryMirror (0.57s)

                                                
                                    
TestOffline (62.62s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-545502 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-545502 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m1.815443923s)
helpers_test.go:175: Cleaning up "offline-crio-545502" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-545502
--- PASS: TestOffline (62.62s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-513705
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-513705: exit status 85 (50.799519ms)

                                                
                                                
-- stdout --
	* Profile "addons-513705" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-513705"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-513705
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-513705: exit status 85 (51.515986ms)

                                                
                                                
-- stdout --
	* Profile "addons-513705" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-513705"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestCertOptions (45.52s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-423966 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-423966 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (44.292429398s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-423966 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-423966 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-423966 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-423966" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-423966
--- PASS: TestCertOptions (45.52s)

                                                
                                    
TestCertExpiration (255.03s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-655634 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-655634 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (43.361427221s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-655634 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-655634 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (30.635479458s)
helpers_test.go:175: Cleaning up "cert-expiration-655634" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-655634
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-655634: (1.027102403s)
--- PASS: TestCertExpiration (255.03s)

                                                
                                    
TestForceSystemdFlag (55.39s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-670923 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-670923 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (54.132244881s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-670923 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-670923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-670923
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-670923: (1.058272034s)
--- PASS: TestForceSystemdFlag (55.39s)

                                                
                                    
TestForceSystemdEnv (48.35s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-298141 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-298141 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (47.356011427s)
helpers_test.go:175: Cleaning up "force-systemd-env-298141" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-298141
--- PASS: TestForceSystemdEnv (48.35s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.2s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.20s)

                                                
                                    
TestErrorSpam/setup (38.04s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-498664 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-498664 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-498664 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-498664 --driver=kvm2  --container-runtime=crio: (38.038145182s)
--- PASS: TestErrorSpam/setup (38.04s)

                                                
                                    
TestErrorSpam/start (0.35s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-498664 --log_dir /tmp/nospam-498664 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-498664 --log_dir /tmp/nospam-498664 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-498664 --log_dir /tmp/nospam-498664 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
TestErrorSpam/status (0.71s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-498664 --log_dir /tmp/nospam-498664 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-498664 --log_dir /tmp/nospam-498664 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-498664 --log_dir /tmp/nospam-498664 status
--- PASS: TestErrorSpam/status (0.71s)

                                                
                                    
TestErrorSpam/pause (1.47s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-498664 --log_dir /tmp/nospam-498664 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-498664 --log_dir /tmp/nospam-498664 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-498664 --log_dir /tmp/nospam-498664 pause
--- PASS: TestErrorSpam/pause (1.47s)

                                                
                                    
TestErrorSpam/unpause (1.49s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-498664 --log_dir /tmp/nospam-498664 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-498664 --log_dir /tmp/nospam-498664 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-498664 --log_dir /tmp/nospam-498664 unpause
--- PASS: TestErrorSpam/unpause (1.49s)

                                                
                                    
TestErrorSpam/stop (5.05s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-498664 --log_dir /tmp/nospam-498664 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-498664 --log_dir /tmp/nospam-498664 stop: (1.613099069s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-498664 --log_dir /tmp/nospam-498664 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-498664 --log_dir /tmp/nospam-498664 stop: (1.528979998s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-498664 --log_dir /tmp/nospam-498664 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-498664 --log_dir /tmp/nospam-498664 stop: (1.909762795s)
--- PASS: TestErrorSpam/stop (5.05s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19302-122995/.minikube/files/etc/test/nested/copy/130170/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (65.02s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-554179 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-554179 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m5.024093274s)
--- PASS: TestFunctional/serial/StartWithProxy (65.02s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (31.15s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-554179 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-554179 --alsologtostderr -v=8: (31.144909751s)
functional_test.go:659: soft start took 31.145519307s for "functional-554179" cluster.
--- PASS: TestFunctional/serial/SoftStart (31.15s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-554179 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.63s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-554179 cache add registry.k8s.io/pause:3.1: (1.531156494s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-554179 cache add registry.k8s.io/pause:3.3: (1.561209872s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-554179 cache add registry.k8s.io/pause:latest: (1.540753268s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.63s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.36s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-554179 /tmp/TestFunctionalserialCacheCmdcacheadd_local4285103094/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 cache add minikube-local-cache-test:functional-554179
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-554179 cache add minikube-local-cache-test:functional-554179: (2.076372042s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 cache delete minikube-local-cache-test:functional-554179
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-554179
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.36s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.97s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-554179 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (206.127638ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-554179 cache reload: (1.300980335s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.97s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 kubectl -- --context functional-554179 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-554179 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (37.5s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-554179 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-554179 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.497355752s)
functional_test.go:757: restart took 37.497468348s for "functional-554179" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.50s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-554179 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.31s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-554179 logs: (1.312403295s)
--- PASS: TestFunctional/serial/LogsCmd (1.31s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.31s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 logs --file /tmp/TestFunctionalserialLogsFileCmd15984106/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-554179 logs --file /tmp/TestFunctionalserialLogsFileCmd15984106/001/logs.txt: (1.31388772s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.31s)

                                                
                                    
TestFunctional/serial/InvalidService (5.03s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-554179 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-554179
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-554179: exit status 115 (261.013842ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.154:30832 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-554179 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-554179 delete -f testdata/invalidsvc.yaml: (1.580529585s)
--- PASS: TestFunctional/serial/InvalidService (5.03s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-554179 config get cpus: exit status 14 (72.453194ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-554179 config get cpus: exit status 14 (48.601558ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (42.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-554179 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-554179 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 143276: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (42.60s)

                                                
                                    
TestFunctional/parallel/DryRun (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-554179 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-554179 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (136.361203ms)

                                                
                                                
-- stdout --
	* [functional-554179] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19302-122995/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-122995/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 04:21:39.640881  143045 out.go:291] Setting OutFile to fd 1 ...
	I0719 04:21:39.640997  143045 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:21:39.641016  143045 out.go:304] Setting ErrFile to fd 2...
	I0719 04:21:39.641028  143045 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:21:39.641281  143045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-122995/.minikube/bin
	I0719 04:21:39.641820  143045 out.go:298] Setting JSON to false
	I0719 04:21:39.642819  143045 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7443,"bootTime":1721355457,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 04:21:39.642875  143045 start.go:139] virtualization: kvm guest
	I0719 04:21:39.645183  143045 out.go:177] * [functional-554179] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0719 04:21:39.646420  143045 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 04:21:39.646463  143045 notify.go:220] Checking for updates...
	I0719 04:21:39.648739  143045 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 04:21:39.649875  143045 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-122995/kubeconfig
	I0719 04:21:39.650915  143045 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-122995/.minikube
	I0719 04:21:39.652066  143045 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 04:21:39.653292  143045 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 04:21:39.654854  143045 config.go:182] Loaded profile config "functional-554179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:21:39.655255  143045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:21:39.655308  143045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:21:39.671221  143045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35931
	I0719 04:21:39.671752  143045 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:21:39.672405  143045 main.go:141] libmachine: Using API Version  1
	I0719 04:21:39.672433  143045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:21:39.672887  143045 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:21:39.673156  143045 main.go:141] libmachine: (functional-554179) Calling .DriverName
	I0719 04:21:39.673479  143045 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 04:21:39.673921  143045 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:21:39.673965  143045 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:21:39.689477  143045 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43285
	I0719 04:21:39.689955  143045 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:21:39.690505  143045 main.go:141] libmachine: Using API Version  1
	I0719 04:21:39.690536  143045 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:21:39.691002  143045 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:21:39.691221  143045 main.go:141] libmachine: (functional-554179) Calling .DriverName
	I0719 04:21:39.725971  143045 out.go:177] * Using the kvm2 driver based on existing profile
	I0719 04:21:39.727319  143045 start.go:297] selected driver: kvm2
	I0719 04:21:39.727352  143045 start.go:901] validating driver "kvm2" against &{Name:functional-554179 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-554179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:21:39.727494  143045 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 04:21:39.729760  143045 out.go:177] 
	W0719 04:21:39.731106  143045 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0719 04:21:39.732512  143045 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-554179 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.30s)
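
Both dry-run invocations above validate flags without touching the VM: the first is expected to fail with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) because 250MB is below the 1800MB usable minimum, while the second, with no memory override, succeeds against the existing profile. A hand-run equivalent:
	# expected to exit 23 with RSRC_INSUFFICIENT_REQ_MEMORY
	out/minikube-linux-amd64 start -p functional-554179 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
	# expected to exit 0 against the existing profile
	out/minikube-linux-amd64 start -p functional-554179 --dry-run --driver=kvm2 --container-runtime=crio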

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-554179 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-554179 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (147.943261ms)

                                                
                                                
-- stdout --
	* [functional-554179] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19302-122995/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-122995/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 04:21:39.947782  143116 out.go:291] Setting OutFile to fd 1 ...
	I0719 04:21:39.947907  143116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:21:39.947918  143116 out.go:304] Setting ErrFile to fd 2...
	I0719 04:21:39.947925  143116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:21:39.948355  143116 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-122995/.minikube/bin
	I0719 04:21:39.948997  143116 out.go:298] Setting JSON to false
	I0719 04:21:39.950416  143116 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7443,"bootTime":1721355457,"procs":249,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0719 04:21:39.950505  143116 start.go:139] virtualization: kvm guest
	I0719 04:21:39.952974  143116 out.go:177] * [functional-554179] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0719 04:21:39.954369  143116 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 04:21:39.954410  143116 notify.go:220] Checking for updates...
	I0719 04:21:39.956714  143116 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 04:21:39.958243  143116 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19302-122995/kubeconfig
	I0719 04:21:39.959748  143116 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-122995/.minikube
	I0719 04:21:39.961233  143116 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0719 04:21:39.962556  143116 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 04:21:39.964377  143116 config.go:182] Loaded profile config "functional-554179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:21:39.965022  143116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:21:39.965117  143116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:21:39.980972  143116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38971
	I0719 04:21:39.981429  143116 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:21:39.982085  143116 main.go:141] libmachine: Using API Version  1
	I0719 04:21:39.982144  143116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:21:39.982616  143116 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:21:39.982858  143116 main.go:141] libmachine: (functional-554179) Calling .DriverName
	I0719 04:21:39.983173  143116 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 04:21:39.983587  143116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:21:39.983634  143116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:21:39.999035  143116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34155
	I0719 04:21:39.999493  143116 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:21:39.999962  143116 main.go:141] libmachine: Using API Version  1
	I0719 04:21:39.999991  143116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:21:40.000417  143116 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:21:40.000638  143116 main.go:141] libmachine: (functional-554179) Calling .DriverName
	I0719 04:21:40.034393  143116 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0719 04:21:40.035627  143116 start.go:297] selected driver: kvm2
	I0719 04:21:40.035663  143116 start.go:901] validating driver "kvm2" against &{Name:functional-554179 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-554179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.154 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:21:40.035815  143116 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 04:21:40.038267  143116 out.go:177] 
	W0719 04:21:40.039538  143116 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0719 04:21:40.040751  143116 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
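
The French output above is the same dry-run failure rendered under a French locale. A sketch of one way to reproduce it, assuming minikube picks the locale up from LC_ALL (the test's exact environment setup is not visible in this log, so the variable name is an assumption):
	# assumed locale override; expected to print the French RSRC_INSUFFICIENT_REQ_MEMORY message and exit 23
	LC_ALL=fr out/minikube-linux-amd64 start -p functional-554179 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio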

                                                
                                    
TestFunctional/parallel/StatusCmd (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.23s)
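
The three invocations above cover the default, Go-template, and JSON output modes of the status command. Equivalent hand-run commands against this profile (the template keys are the same ones used in the run above; the labels before each colon are arbitrary):
	out/minikube-linux-amd64 -p functional-554179 status
	out/minikube-linux-amd64 -p functional-554179 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	out/minikube-linux-amd64 -p functional-554179 status -o json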

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-554179 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-554179 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-xxv5f" [0d626aba-1508-4cf9-a2e1-f176c0477a14] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-xxv5f" [0d626aba-1508-4cf9-a2e1-f176c0477a14] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003542149s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.154:31783
functional_test.go:1671: http://192.168.39.154:31783: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-xxv5f

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.154:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.154:31783
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.75s)
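
The connectivity check boils down to: deploy echoserver, expose it as a NodePort, resolve the node URL through minikube, and issue a plain HTTP GET. The test performs the GET in Go; curl is used below purely as an illustration of the same request:
	kubectl --context functional-554179 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-554179 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(out/minikube-linux-amd64 -p functional-554179 service hello-node-connect --url)
	curl -s "$URL"    # echoes back the pod hostname and request headers, as captured above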

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (26.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8e59d2e1-e0b4-4113-b6bf-842d10aa7028] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004966587s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-554179 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-554179 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-554179 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-554179 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [947289a4-6673-4806-8b0b-79a5c6ab40f6] Pending
helpers_test.go:344: "sp-pod" [947289a4-6673-4806-8b0b-79a5c6ab40f6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [947289a4-6673-4806-8b0b-79a5c6ab40f6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.005329965s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-554179 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-554179 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-554179 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f1e77520-f386-42e8-adca-2af3bc0ab98c] Pending
2024/07/19 04:22:22 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "sp-pod" [f1e77520-f386-42e8-adca-2af3bc0ab98c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f1e77520-f386-42e8-adca-2af3bc0ab98c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003786677s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-554179 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.48s)
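
The claim/pod shuffle above checks that data written through the PVC survives deletion and recreation of the consuming pod. The same sequence by hand, using the manifests from the repo's testdata directory, with explicit waits added for readability (the test polls for pod readiness instead):
	kubectl --context functional-554179 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-554179 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-554179 wait --for=condition=Ready pod/sp-pod --timeout=3m
	kubectl --context functional-554179 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-554179 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-554179 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-554179 wait --for=condition=Ready pod/sp-pod --timeout=3m
	kubectl --context functional-554179 exec sp-pod -- ls /tmp/mount    # foo should persist across pod recreation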

                                                
                                    
TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 ssh -n functional-554179 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 cp functional-554179:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2324210066/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 ssh -n functional-554179 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 ssh -n functional-554179 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.32s)
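
The cp command is exercised in both directions and each copy is verified by reading the file back over ssh. A condensed hand-run version (the /tmp destination below stands in for the per-test temporary directory used above):
	out/minikube-linux-amd64 -p functional-554179 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-amd64 -p functional-554179 ssh -n functional-554179 "sudo cat /home/docker/cp-test.txt"
	out/minikube-linux-amd64 -p functional-554179 cp functional-554179:/home/docker/cp-test.txt /tmp/cp-test.txt
	diff testdata/cp-test.txt /tmp/cp-test.txt    # the round-trip should be byte-identical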

                                                
                                    
TestFunctional/parallel/MySQL (24.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-554179 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-qzff5" [69ecfea0-4ba9-497e-bee2-c4759bca7b8e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-qzff5" [69ecfea0-4ba9-497e-bee2-c4759bca7b8e] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.010943133s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-554179 exec mysql-64454c8b5c-qzff5 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-554179 exec mysql-64454c8b5c-qzff5 -- mysql -ppassword -e "show databases;": exit status 1 (210.918103ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-554179 exec mysql-64454c8b5c-qzff5 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-554179 exec mysql-64454c8b5c-qzff5 -- mysql -ppassword -e "show databases;": exit status 1 (154.592278ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-554179 exec mysql-64454c8b5c-qzff5 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.04s)
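
The two failed "show databases;" attempts above are normal: the pod reports Running before mysqld finishes initializing, so the socket is not accepting connections yet and the test simply retries. A hand-run equivalent with a small retry loop (pod name taken from this run):
	until kubectl --context functional-554179 exec mysql-64454c8b5c-qzff5 -- \
	    mysql -ppassword -e "show databases;"; do
	  sleep 5    # mysqld may still be initializing even though the pod is Running
	done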

                                                
                                    
TestFunctional/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/130170/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 ssh "sudo cat /etc/test/nested/copy/130170/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

                                                
                                    
TestFunctional/parallel/CertSync (1.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/130170.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 ssh "sudo cat /etc/ssl/certs/130170.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/130170.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 ssh "sudo cat /usr/share/ca-certificates/130170.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/1301702.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 ssh "sudo cat /etc/ssl/certs/1301702.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/1301702.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 ssh "sudo cat /usr/share/ca-certificates/1301702.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.41s)
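
The cert sync check reads the same test certificate back from every location it is synced to inside the VM; the .0 entries appear to be the OpenSSL hash-named links for the same files. The same checks by hand, using the paths from this run:
	# all three should return the same PEM content
	out/minikube-linux-amd64 -p functional-554179 ssh "sudo cat /etc/ssl/certs/130170.pem"
	out/minikube-linux-amd64 -p functional-554179 ssh "sudo cat /usr/share/ca-certificates/130170.pem"
	out/minikube-linux-amd64 -p functional-554179 ssh "sudo cat /etc/ssl/certs/51391683.0"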

                                                
                                    
TestFunctional/parallel/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-554179 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-554179 ssh "sudo systemctl is-active docker": exit status 1 (283.055617ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-554179 ssh "sudo systemctl is-active containerd": exit status 1 (226.1276ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)
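
With crio selected as the container runtime, docker and containerd should be inactive inside the VM; systemctl is-active exits 3 for an inactive unit, which minikube ssh surfaces as the exit status 1 seen above, so the non-zero exits are the expected result. The crio check at the end is added here only for contrast:
	# both should print "inactive" and exit non-zero on a crio-backed node
	out/minikube-linux-amd64 -p functional-554179 ssh "sudo systemctl is-active docker"
	out/minikube-linux-amd64 -p functional-554179 ssh "sudo systemctl is-active containerd"
	out/minikube-linux-amd64 -p functional-554179 ssh "sudo systemctl is-active crio"    # the active runtime should report "active"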

                                                
                                    
TestFunctional/parallel/License (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2284: (dbg) Done: out/minikube-linux-amd64 license: (1.043826531s)
--- PASS: TestFunctional/parallel/License (1.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (12.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-554179 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-554179 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-bsb5l" [8e7d7d72-5006-4251-9c68-9556477e1f12] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-bsb5l" [8e7d7d72-5006-4251-9c68-9556477e1f12] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.003945862s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.20s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "371.171219ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "54.910803ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (20.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-554179 /tmp/TestFunctionalparallelMountCmdany-port207044001/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721362899190505965" to /tmp/TestFunctionalparallelMountCmdany-port207044001/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721362899190505965" to /tmp/TestFunctionalparallelMountCmdany-port207044001/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721362899190505965" to /tmp/TestFunctionalparallelMountCmdany-port207044001/001/test-1721362899190505965
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-554179 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (276.913871ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 19 04:21 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 19 04:21 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 19 04:21 test-1721362899190505965
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 ssh cat /mount-9p/test-1721362899190505965
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-554179 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [6027eb66-019a-4ad1-a551-f118fda50b33] Pending
helpers_test.go:344: "busybox-mount" [6027eb66-019a-4ad1-a551-f118fda50b33] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [6027eb66-019a-4ad1-a551-f118fda50b33] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [6027eb66-019a-4ad1-a551-f118fda50b33] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 18.00967155s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-554179 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-554179 /tmp/TestFunctionalparallelMountCmdany-port207044001/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (20.59s)
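
The 9p mount test keeps minikube mount running in the background, confirms the mount from inside the VM, and then lets a pod consume the files. A condensed hand-run version, with the busybox pod step omitted and an illustrative host path in place of the per-test temporary directory:
	mkdir -p /tmp/demo-mount && echo "hello from the host" > /tmp/demo-mount/created-by-test
	out/minikube-linux-amd64 mount -p functional-554179 /tmp/demo-mount:/mount-9p &
	MOUNT_PID=$!
	out/minikube-linux-amd64 -p functional-554179 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-554179 ssh "cat /mount-9p/created-by-test"
	kill $MOUNT_PID    # or: out/minikube-linux-amd64 -p functional-554179 ssh "sudo umount -f /mount-9p"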

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "343.013623ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "51.56702ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)
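
The profile listing is exercised in human-readable and JSON forms; the --light variant skips probing each profile's cluster status, which is presumably why it returns in roughly 50ms versus roughly 350ms above. The same calls by hand:
	out/minikube-linux-amd64 profile list
	out/minikube-linux-amd64 profile list -o json
	out/minikube-linux-amd64 profile list -o json --light    # skips the per-profile status check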

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 service list
functional_test.go:1455: (dbg) Done: out/minikube-linux-amd64 -p functional-554179 service list: (1.072805183s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.07s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 service list -o json
functional_test.go:1490: Took "894.187068ms" to run "out/minikube-linux-amd64 -p functional-554179 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.89s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.154:30205
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.48s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.154:30205
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.31s)
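
The HTTPS, Format, and URL subtests above all resolve the same hello-node NodePort (30205 in this run), differing only in how the endpoint is rendered. Equivalent hand-run commands:
	out/minikube-linux-amd64 -p functional-554179 service hello-node --url                        # http://192.168.39.154:30205
	out/minikube-linux-amd64 -p functional-554179 service hello-node --https --url                # https form of the same endpoint
	out/minikube-linux-amd64 -p functional-554179 service hello-node --url --format='{{.IP}}'     # just the node IP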

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.74s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-554179 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-554179
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-554179
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-554179 image ls --format short --alsologtostderr:
I0719 04:22:04.083154  144649 out.go:291] Setting OutFile to fd 1 ...
I0719 04:22:04.083408  144649 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 04:22:04.083418  144649 out.go:304] Setting ErrFile to fd 2...
I0719 04:22:04.083422  144649 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 04:22:04.083630  144649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-122995/.minikube/bin
I0719 04:22:04.084181  144649 config.go:182] Loaded profile config "functional-554179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 04:22:04.084278  144649 config.go:182] Loaded profile config "functional-554179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 04:22:04.084655  144649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0719 04:22:04.084705  144649 main.go:141] libmachine: Launching plugin server for driver kvm2
I0719 04:22:04.099632  144649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39385
I0719 04:22:04.100143  144649 main.go:141] libmachine: () Calling .GetVersion
I0719 04:22:04.100675  144649 main.go:141] libmachine: Using API Version  1
I0719 04:22:04.100695  144649 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 04:22:04.101045  144649 main.go:141] libmachine: () Calling .GetMachineName
I0719 04:22:04.101257  144649 main.go:141] libmachine: (functional-554179) Calling .GetState
I0719 04:22:04.102794  144649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0719 04:22:04.102831  144649 main.go:141] libmachine: Launching plugin server for driver kvm2
I0719 04:22:04.118613  144649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34247
I0719 04:22:04.119043  144649 main.go:141] libmachine: () Calling .GetVersion
I0719 04:22:04.119573  144649 main.go:141] libmachine: Using API Version  1
I0719 04:22:04.119602  144649 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 04:22:04.119952  144649 main.go:141] libmachine: () Calling .GetMachineName
I0719 04:22:04.120143  144649 main.go:141] libmachine: (functional-554179) Calling .DriverName
I0719 04:22:04.120337  144649 ssh_runner.go:195] Run: systemctl --version
I0719 04:22:04.120366  144649 main.go:141] libmachine: (functional-554179) Calling .GetSSHHostname
I0719 04:22:04.123423  144649 main.go:141] libmachine: (functional-554179) DBG | domain functional-554179 has defined MAC address 52:54:00:30:18:8a in network mk-functional-554179
I0719 04:22:04.123832  144649 main.go:141] libmachine: (functional-554179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:18:8a", ip: ""} in network mk-functional-554179: {Iface:virbr1 ExpiryTime:2024-07-19 05:19:18 +0000 UTC Type:0 Mac:52:54:00:30:18:8a Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:functional-554179 Clientid:01:52:54:00:30:18:8a}
I0719 04:22:04.123866  144649 main.go:141] libmachine: (functional-554179) DBG | domain functional-554179 has defined IP address 192.168.39.154 and MAC address 52:54:00:30:18:8a in network mk-functional-554179
I0719 04:22:04.124011  144649 main.go:141] libmachine: (functional-554179) Calling .GetSSHPort
I0719 04:22:04.124194  144649 main.go:141] libmachine: (functional-554179) Calling .GetSSHKeyPath
I0719 04:22:04.124350  144649 main.go:141] libmachine: (functional-554179) Calling .GetSSHUsername
I0719 04:22:04.124532  144649 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/functional-554179/id_rsa Username:docker}
I0719 04:22:04.218722  144649 ssh_runner.go:195] Run: sudo crictl images --output json
I0719 04:22:04.265262  144649 main.go:141] libmachine: Making call to close driver server
I0719 04:22:04.265280  144649 main.go:141] libmachine: (functional-554179) Calling .Close
I0719 04:22:04.265585  144649 main.go:141] libmachine: Successfully made call to close driver server
I0719 04:22:04.265610  144649 main.go:141] libmachine: Making call to close connection to plugin binary
I0719 04:22:04.265621  144649 main.go:141] libmachine: Making call to close driver server
I0719 04:22:04.265629  144649 main.go:141] libmachine: (functional-554179) Calling .Close
I0719 04:22:04.265633  144649 main.go:141] libmachine: (functional-554179) DBG | Closing plugin on server side
I0719 04:22:04.265832  144649 main.go:141] libmachine: Successfully made call to close driver server
I0719 04:22:04.265857  144649 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
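
As the stderr above shows, image ls works by running "sudo crictl images --output json" on the node over ssh; only the output formatting differs between the variants exercised here:
	out/minikube-linux-amd64 -p functional-554179 image ls --format short
	out/minikube-linux-amd64 -p functional-554179 image ls --format table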

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-554179 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| localhost/my-image                      | functional-554179  | 46c637c07790b | 1.47MB |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 1f6d574d502f3 | 118MB  |
| registry.k8s.io/kube-scheduler          | v1.30.3            | 3edc18e7b7672 | 63.1MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kicbase/echo-server           | functional-554179  | 9056ab77afb8e | 4.94MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-proxy              | v1.30.3            | 55bb025d2cfa5 | 86MB   |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5cc3abe5717db | 87.2MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-554179  | 9cfa5f5c06244 | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 76932a3b37d7e | 112MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-554179 image ls --format table --alsologtostderr:
I0719 04:22:09.516479  144854 out.go:291] Setting OutFile to fd 1 ...
I0719 04:22:09.516596  144854 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 04:22:09.516605  144854 out.go:304] Setting ErrFile to fd 2...
I0719 04:22:09.516610  144854 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 04:22:09.516861  144854 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-122995/.minikube/bin
I0719 04:22:09.517488  144854 config.go:182] Loaded profile config "functional-554179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 04:22:09.517601  144854 config.go:182] Loaded profile config "functional-554179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 04:22:09.518050  144854 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0719 04:22:09.518102  144854 main.go:141] libmachine: Launching plugin server for driver kvm2
I0719 04:22:09.532952  144854 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34475
I0719 04:22:09.533426  144854 main.go:141] libmachine: () Calling .GetVersion
I0719 04:22:09.534093  144854 main.go:141] libmachine: Using API Version  1
I0719 04:22:09.534134  144854 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 04:22:09.534523  144854 main.go:141] libmachine: () Calling .GetMachineName
I0719 04:22:09.534739  144854 main.go:141] libmachine: (functional-554179) Calling .GetState
I0719 04:22:09.536552  144854 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0719 04:22:09.536590  144854 main.go:141] libmachine: Launching plugin server for driver kvm2
I0719 04:22:09.551807  144854 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43029
I0719 04:22:09.552193  144854 main.go:141] libmachine: () Calling .GetVersion
I0719 04:22:09.552630  144854 main.go:141] libmachine: Using API Version  1
I0719 04:22:09.552651  144854 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 04:22:09.552970  144854 main.go:141] libmachine: () Calling .GetMachineName
I0719 04:22:09.553202  144854 main.go:141] libmachine: (functional-554179) Calling .DriverName
I0719 04:22:09.553418  144854 ssh_runner.go:195] Run: systemctl --version
I0719 04:22:09.553443  144854 main.go:141] libmachine: (functional-554179) Calling .GetSSHHostname
I0719 04:22:09.556130  144854 main.go:141] libmachine: (functional-554179) DBG | domain functional-554179 has defined MAC address 52:54:00:30:18:8a in network mk-functional-554179
I0719 04:22:09.556492  144854 main.go:141] libmachine: (functional-554179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:18:8a", ip: ""} in network mk-functional-554179: {Iface:virbr1 ExpiryTime:2024-07-19 05:19:18 +0000 UTC Type:0 Mac:52:54:00:30:18:8a Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:functional-554179 Clientid:01:52:54:00:30:18:8a}
I0719 04:22:09.556524  144854 main.go:141] libmachine: (functional-554179) DBG | domain functional-554179 has defined IP address 192.168.39.154 and MAC address 52:54:00:30:18:8a in network mk-functional-554179
I0719 04:22:09.556579  144854 main.go:141] libmachine: (functional-554179) Calling .GetSSHPort
I0719 04:22:09.556747  144854 main.go:141] libmachine: (functional-554179) Calling .GetSSHKeyPath
I0719 04:22:09.556888  144854 main.go:141] libmachine: (functional-554179) Calling .GetSSHUsername
I0719 04:22:09.557024  144854 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/functional-554179/id_rsa Username:docker}
I0719 04:22:09.630823  144854 ssh_runner.go:195] Run: sudo crictl images --output json
I0719 04:22:09.664545  144854 main.go:141] libmachine: Making call to close driver server
I0719 04:22:09.664562  144854 main.go:141] libmachine: (functional-554179) Calling .Close
I0719 04:22:09.664912  144854 main.go:141] libmachine: (functional-554179) DBG | Closing plugin on server side
I0719 04:22:09.664913  144854 main.go:141] libmachine: Successfully made call to close driver server
I0719 04:22:09.664946  144854 main.go:141] libmachine: Making call to close connection to plugin binary
I0719 04:22:09.664955  144854 main.go:141] libmachine: Making call to close driver server
I0719 04:22:09.664988  144854 main.go:141] libmachine: (functional-554179) Calling .Close
I0719 04:22:09.665235  144854 main.go:141] libmachine: Successfully made call to close driver server
I0719 04:22:09.665257  144854 main.go:141] libmachine: (functional-554179) DBG | Closing plugin on server side
I0719 04:22:09.665283  144854 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.19s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-554179 image ls --format json --alsologtostderr:
[{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7","registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"112198984"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d
7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"6dd5c9d679450be9064dd65cbb6d53e94b45db00bd8153e479ef060b06b4e76c","repoDigests":["docker.io/library/57e159ef07a1e6049fa68d83a3b8058a1558f9676d894e01c9e5076012592297-tmp@sha256:43c165fc6c6cf77b3c17a6ded2ee04f6f57f86c197502a81194a5592314a37ff"],"repoTags":[],"size":"1466018"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed
5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"3
861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":["registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"85953945"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags":["registry.k8s.io/
kube-scheduler:v1.30.3"],"size":"63051080"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9cfa5f5c06244475d657f414a4870d836bb1d609f675fd310144a48ec9b98c97","repoDigests":["localhost/minikube-local-cache-test@sha256:ca9c0f36dcac80dbfaf4ba08dfc696abb8d7f6a921882a862134f751c95bbb3a"],"repoTags":["localhost/minikube-local-c
ache-test:functional-554179"],"size":"3330"},{"id":"46c637c07790bdbd5a14b80aa265a3292af9c7f6522df1b36b9343b05e46339c","repoDigests":["localhost/my-image@sha256:7de733fe32c829dbbf8a42f4a0d5f741aaae57cd55ccd91c0a31035f400c0154"],"repoTags":["localhost/my-image:functional-554179"],"size":"1468599"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"si
ze":"686139"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:functional-554179"],"size":"4943877"},{"id":"5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f","repoDigests":["docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115","docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"87165492"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":["registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c","registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"1176
09954"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-554179 image ls --format json --alsologtostderr:
I0719 04:22:09.320367  144830 out.go:291] Setting OutFile to fd 1 ...
I0719 04:22:09.320476  144830 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 04:22:09.320486  144830 out.go:304] Setting ErrFile to fd 2...
I0719 04:22:09.320490  144830 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 04:22:09.320672  144830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-122995/.minikube/bin
I0719 04:22:09.321254  144830 config.go:182] Loaded profile config "functional-554179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 04:22:09.321362  144830 config.go:182] Loaded profile config "functional-554179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 04:22:09.321721  144830 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0719 04:22:09.321777  144830 main.go:141] libmachine: Launching plugin server for driver kvm2
I0719 04:22:09.336398  144830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42909
I0719 04:22:09.336821  144830 main.go:141] libmachine: () Calling .GetVersion
I0719 04:22:09.337430  144830 main.go:141] libmachine: Using API Version  1
I0719 04:22:09.337458  144830 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 04:22:09.337741  144830 main.go:141] libmachine: () Calling .GetMachineName
I0719 04:22:09.337936  144830 main.go:141] libmachine: (functional-554179) Calling .GetState
I0719 04:22:09.339593  144830 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0719 04:22:09.339631  144830 main.go:141] libmachine: Launching plugin server for driver kvm2
I0719 04:22:09.354177  144830 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37739
I0719 04:22:09.354534  144830 main.go:141] libmachine: () Calling .GetVersion
I0719 04:22:09.355032  144830 main.go:141] libmachine: Using API Version  1
I0719 04:22:09.355054  144830 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 04:22:09.355369  144830 main.go:141] libmachine: () Calling .GetMachineName
I0719 04:22:09.355546  144830 main.go:141] libmachine: (functional-554179) Calling .DriverName
I0719 04:22:09.355742  144830 ssh_runner.go:195] Run: systemctl --version
I0719 04:22:09.355776  144830 main.go:141] libmachine: (functional-554179) Calling .GetSSHHostname
I0719 04:22:09.358288  144830 main.go:141] libmachine: (functional-554179) DBG | domain functional-554179 has defined MAC address 52:54:00:30:18:8a in network mk-functional-554179
I0719 04:22:09.358637  144830 main.go:141] libmachine: (functional-554179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:18:8a", ip: ""} in network mk-functional-554179: {Iface:virbr1 ExpiryTime:2024-07-19 05:19:18 +0000 UTC Type:0 Mac:52:54:00:30:18:8a Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:functional-554179 Clientid:01:52:54:00:30:18:8a}
I0719 04:22:09.358660  144830 main.go:141] libmachine: (functional-554179) DBG | domain functional-554179 has defined IP address 192.168.39.154 and MAC address 52:54:00:30:18:8a in network mk-functional-554179
I0719 04:22:09.358804  144830 main.go:141] libmachine: (functional-554179) Calling .GetSSHPort
I0719 04:22:09.358945  144830 main.go:141] libmachine: (functional-554179) Calling .GetSSHKeyPath
I0719 04:22:09.359088  144830 main.go:141] libmachine: (functional-554179) Calling .GetSSHUsername
I0719 04:22:09.359215  144830 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/functional-554179/id_rsa Username:docker}
I0719 04:22:09.435605  144830 ssh_runner.go:195] Run: sudo crictl images --output json
I0719 04:22:09.471879  144830 main.go:141] libmachine: Making call to close driver server
I0719 04:22:09.471892  144830 main.go:141] libmachine: (functional-554179) Calling .Close
I0719 04:22:09.472191  144830 main.go:141] libmachine: Successfully made call to close driver server
I0719 04:22:09.472212  144830 main.go:141] libmachine: Making call to close connection to plugin binary
I0719 04:22:09.472232  144830 main.go:141] libmachine: Making call to close driver server
I0719 04:22:09.472224  144830 main.go:141] libmachine: (functional-554179) DBG | Closing plugin on server side
I0719 04:22:09.472242  144830 main.go:141] libmachine: (functional-554179) Calling .Close
I0719 04:22:09.472457  144830 main.go:141] libmachine: (functional-554179) DBG | Closing plugin on server side
I0719 04:22:09.472478  144830 main.go:141] libmachine: Successfully made call to close driver server
I0719 04:22:09.472516  144830 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.20s)
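The --format json listing above is a flat array of image records with id, repoDigests, repoTags and size fields (size is a byte count encoded as a string). Below is a minimal Go sketch for decoding that output outside the test harness; the binary path and profile name are copied from this run, and the wrapper itself is an illustrative assumption rather than part of functional_test.go.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageRecord mirrors the field names visible in the JSON stdout above.
type imageRecord struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // byte count as a string, e.g. "112198984"
}

func main() {
	// Same invocation as functional_test.go:260 above.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-554179",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []imageRecord
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		tag := "<none>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%-55s %s bytes\n", tag, img.Size)
	}
}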

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-554179 image ls --format yaml --alsologtostderr:
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:functional-554179
size: "4943877"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests:
- registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "85953945"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
- registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "112198984"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "63051080"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
- registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117609954"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f
repoDigests:
- docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "87165492"
- id: 9cfa5f5c06244475d657f414a4870d836bb1d609f675fd310144a48ec9b98c97
repoDigests:
- localhost/minikube-local-cache-test@sha256:ca9c0f36dcac80dbfaf4ba08dfc696abb8d7f6a921882a862134f751c95bbb3a
repoTags:
- localhost/minikube-local-cache-test:functional-554179
size: "3330"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-554179 image ls --format yaml --alsologtostderr:
I0719 04:22:04.315536  144672 out.go:291] Setting OutFile to fd 1 ...
I0719 04:22:04.315672  144672 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 04:22:04.315683  144672 out.go:304] Setting ErrFile to fd 2...
I0719 04:22:04.315689  144672 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 04:22:04.315897  144672 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-122995/.minikube/bin
I0719 04:22:04.316438  144672 config.go:182] Loaded profile config "functional-554179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 04:22:04.316555  144672 config.go:182] Loaded profile config "functional-554179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 04:22:04.316959  144672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0719 04:22:04.317022  144672 main.go:141] libmachine: Launching plugin server for driver kvm2
I0719 04:22:04.332071  144672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35959
I0719 04:22:04.332596  144672 main.go:141] libmachine: () Calling .GetVersion
I0719 04:22:04.333208  144672 main.go:141] libmachine: Using API Version  1
I0719 04:22:04.333231  144672 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 04:22:04.333582  144672 main.go:141] libmachine: () Calling .GetMachineName
I0719 04:22:04.333782  144672 main.go:141] libmachine: (functional-554179) Calling .GetState
I0719 04:22:04.335726  144672 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0719 04:22:04.335782  144672 main.go:141] libmachine: Launching plugin server for driver kvm2
I0719 04:22:04.350747  144672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42941
I0719 04:22:04.351182  144672 main.go:141] libmachine: () Calling .GetVersion
I0719 04:22:04.351697  144672 main.go:141] libmachine: Using API Version  1
I0719 04:22:04.351721  144672 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 04:22:04.352024  144672 main.go:141] libmachine: () Calling .GetMachineName
I0719 04:22:04.352230  144672 main.go:141] libmachine: (functional-554179) Calling .DriverName
I0719 04:22:04.352431  144672 ssh_runner.go:195] Run: systemctl --version
I0719 04:22:04.352454  144672 main.go:141] libmachine: (functional-554179) Calling .GetSSHHostname
I0719 04:22:04.355467  144672 main.go:141] libmachine: (functional-554179) DBG | domain functional-554179 has defined MAC address 52:54:00:30:18:8a in network mk-functional-554179
I0719 04:22:04.355903  144672 main.go:141] libmachine: (functional-554179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:18:8a", ip: ""} in network mk-functional-554179: {Iface:virbr1 ExpiryTime:2024-07-19 05:19:18 +0000 UTC Type:0 Mac:52:54:00:30:18:8a Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:functional-554179 Clientid:01:52:54:00:30:18:8a}
I0719 04:22:04.355935  144672 main.go:141] libmachine: (functional-554179) DBG | domain functional-554179 has defined IP address 192.168.39.154 and MAC address 52:54:00:30:18:8a in network mk-functional-554179
I0719 04:22:04.356098  144672 main.go:141] libmachine: (functional-554179) Calling .GetSSHPort
I0719 04:22:04.356272  144672 main.go:141] libmachine: (functional-554179) Calling .GetSSHKeyPath
I0719 04:22:04.356414  144672 main.go:141] libmachine: (functional-554179) Calling .GetSSHUsername
I0719 04:22:04.356583  144672 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/functional-554179/id_rsa Username:docker}
I0719 04:22:04.467915  144672 ssh_runner.go:195] Run: sudo crictl images --output json
I0719 04:22:04.566545  144672 main.go:141] libmachine: Making call to close driver server
I0719 04:22:04.566564  144672 main.go:141] libmachine: (functional-554179) Calling .Close
I0719 04:22:04.566888  144672 main.go:141] libmachine: Successfully made call to close driver server
I0719 04:22:04.566909  144672 main.go:141] libmachine: Making call to close connection to plugin binary
I0719 04:22:04.566927  144672 main.go:141] libmachine: Making call to close driver server
I0719 04:22:04.566936  144672 main.go:141] libmachine: (functional-554179) Calling .Close
I0719 04:22:04.567188  144672 main.go:141] libmachine: (functional-554179) DBG | Closing plugin on server side
I0719 04:22:04.567224  144672 main.go:141] libmachine: Successfully made call to close driver server
I0719 04:22:04.567237  144672 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-554179 ssh pgrep buildkitd: exit status 1 (247.564619ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 image build -t localhost/my-image:functional-554179 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-554179 image build -t localhost/my-image:functional-554179 testdata/build --alsologtostderr: (4.245663382s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-554179 image build -t localhost/my-image:functional-554179 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 6dd5c9d6794
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-554179
--> 46c637c0779
Successfully tagged localhost/my-image:functional-554179
46c637c07790bdbd5a14b80aa265a3292af9c7f6522df1b36b9343b05e46339c
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-554179 image build -t localhost/my-image:functional-554179 testdata/build --alsologtostderr:
I0719 04:22:04.869796  144726 out.go:291] Setting OutFile to fd 1 ...
I0719 04:22:04.870122  144726 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 04:22:04.870133  144726 out.go:304] Setting ErrFile to fd 2...
I0719 04:22:04.870140  144726 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 04:22:04.870414  144726 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-122995/.minikube/bin
I0719 04:22:04.871198  144726 config.go:182] Loaded profile config "functional-554179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 04:22:04.871913  144726 config.go:182] Loaded profile config "functional-554179": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0719 04:22:04.872473  144726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0719 04:22:04.872543  144726 main.go:141] libmachine: Launching plugin server for driver kvm2
I0719 04:22:04.887379  144726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35991
I0719 04:22:04.887962  144726 main.go:141] libmachine: () Calling .GetVersion
I0719 04:22:04.888561  144726 main.go:141] libmachine: Using API Version  1
I0719 04:22:04.888578  144726 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 04:22:04.888909  144726 main.go:141] libmachine: () Calling .GetMachineName
I0719 04:22:04.889114  144726 main.go:141] libmachine: (functional-554179) Calling .GetState
I0719 04:22:04.890888  144726 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0719 04:22:04.890939  144726 main.go:141] libmachine: Launching plugin server for driver kvm2
I0719 04:22:04.905559  144726 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37757
I0719 04:22:04.905966  144726 main.go:141] libmachine: () Calling .GetVersion
I0719 04:22:04.906456  144726 main.go:141] libmachine: Using API Version  1
I0719 04:22:04.906498  144726 main.go:141] libmachine: () Calling .SetConfigRaw
I0719 04:22:04.906810  144726 main.go:141] libmachine: () Calling .GetMachineName
I0719 04:22:04.907050  144726 main.go:141] libmachine: (functional-554179) Calling .DriverName
I0719 04:22:04.907255  144726 ssh_runner.go:195] Run: systemctl --version
I0719 04:22:04.907295  144726 main.go:141] libmachine: (functional-554179) Calling .GetSSHHostname
I0719 04:22:04.909858  144726 main.go:141] libmachine: (functional-554179) DBG | domain functional-554179 has defined MAC address 52:54:00:30:18:8a in network mk-functional-554179
I0719 04:22:04.910248  144726 main.go:141] libmachine: (functional-554179) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:18:8a", ip: ""} in network mk-functional-554179: {Iface:virbr1 ExpiryTime:2024-07-19 05:19:18 +0000 UTC Type:0 Mac:52:54:00:30:18:8a Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:functional-554179 Clientid:01:52:54:00:30:18:8a}
I0719 04:22:04.910284  144726 main.go:141] libmachine: (functional-554179) DBG | domain functional-554179 has defined IP address 192.168.39.154 and MAC address 52:54:00:30:18:8a in network mk-functional-554179
I0719 04:22:04.910377  144726 main.go:141] libmachine: (functional-554179) Calling .GetSSHPort
I0719 04:22:04.910558  144726 main.go:141] libmachine: (functional-554179) Calling .GetSSHKeyPath
I0719 04:22:04.910711  144726 main.go:141] libmachine: (functional-554179) Calling .GetSSHUsername
I0719 04:22:04.910857  144726 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/functional-554179/id_rsa Username:docker}
I0719 04:22:05.024956  144726 build_images.go:161] Building image from path: /tmp/build.2885436639.tar
I0719 04:22:05.025037  144726 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0719 04:22:05.042184  144726 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2885436639.tar
I0719 04:22:05.068772  144726 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2885436639.tar: stat -c "%s %y" /var/lib/minikube/build/build.2885436639.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2885436639.tar': No such file or directory
I0719 04:22:05.068816  144726 ssh_runner.go:362] scp /tmp/build.2885436639.tar --> /var/lib/minikube/build/build.2885436639.tar (3072 bytes)
I0719 04:22:05.106155  144726 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2885436639
I0719 04:22:05.126832  144726 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2885436639 -xf /var/lib/minikube/build/build.2885436639.tar
I0719 04:22:05.140853  144726 crio.go:315] Building image: /var/lib/minikube/build/build.2885436639
I0719 04:22:05.140926  144726 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-554179 /var/lib/minikube/build/build.2885436639 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0719 04:22:09.034937  144726 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-554179 /var/lib/minikube/build/build.2885436639 --cgroup-manager=cgroupfs: (3.893988109s)
I0719 04:22:09.035021  144726 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2885436639
I0719 04:22:09.046560  144726 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2885436639.tar
I0719 04:22:09.060324  144726 build_images.go:217] Built localhost/my-image:functional-554179 from /tmp/build.2885436639.tar
I0719 04:22:09.060358  144726 build_images.go:133] succeeded building to: functional-554179
I0719 04:22:09.060362  144726 build_images.go:134] failed building to: 
I0719 04:22:09.060389  144726 main.go:141] libmachine: Making call to close driver server
I0719 04:22:09.060401  144726 main.go:141] libmachine: (functional-554179) Calling .Close
I0719 04:22:09.060674  144726 main.go:141] libmachine: (functional-554179) DBG | Closing plugin on server side
I0719 04:22:09.060699  144726 main.go:141] libmachine: Successfully made call to close driver server
I0719 04:22:09.060717  144726 main.go:141] libmachine: Making call to close connection to plugin binary
I0719 04:22:09.060734  144726 main.go:141] libmachine: Making call to close driver server
I0719 04:22:09.060750  144726 main.go:141] libmachine: (functional-554179) Calling .Close
I0719 04:22:09.060984  144726 main.go:141] libmachine: Successfully made call to close driver server
I0719 04:22:09.060997  144726 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.71s)
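For reference, the build above reduces to the single CLI call at functional_test.go:314: minikube copies the build context tar into the VM and runs sudo podman build with --cgroup-manager=cgroupfs, as the Stderr shows. A hedged Go sketch that replays the same command and then re-lists images, as the test does at functional_test.go:447; the binary path, profile and tag are taken from this run, and the wrapper is an assumption, not minikube code.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same command as functional_test.go:314 in the log above; adjust paths for your checkout.
	build := exec.Command("out/minikube-linux-amd64", "-p", "functional-554179",
		"image", "build", "-t", "localhost/my-image:functional-554179",
		"testdata/build", "--alsologtostderr")
	out, err := build.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		log.Fatalf("image build failed: %v", err)
	}

	// Confirm the tag is visible to CRI-O, as the test does with `image ls`.
	ls := exec.Command("out/minikube-linux-amd64", "-p", "functional-554179", "image", "ls")
	lsOut, err := ls.CombinedOutput()
	fmt.Print(string(lsOut))
	if err != nil {
		log.Fatalf("image ls failed: %v", err)
	}
}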

TestFunctional/parallel/ImageCommands/Setup (1.77s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.754292113s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-554179
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.77s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 image load --daemon docker.io/kicbase/echo-server:functional-554179 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-554179 image load --daemon docker.io/kicbase/echo-server:functional-554179 --alsologtostderr: (1.363063744s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.57s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 image load --daemon docker.io/kicbase/echo-server:functional-554179 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.93s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-554179
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 image load --daemon docker.io/kicbase/echo-server:functional-554179 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.80s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 image save docker.io/kicbase/echo-server:functional-554179 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-554179 image save docker.io/kicbase/echo-server:functional-554179 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.131976292s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.13s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 image rm docker.io/kicbase/echo-server:functional-554179 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

TestFunctional/parallel/MountCmd/specific-port (1.75s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-554179 /tmp/TestFunctionalparallelMountCmdspecific-port3220087817/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-554179 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (187.019433ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-554179 /tmp/TestFunctionalparallelMountCmdspecific-port3220087817/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-554179 ssh "sudo umount -f /mount-9p": exit status 1 (217.899663ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-554179 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-554179 /tmp/TestFunctionalparallelMountCmdspecific-port3220087817/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.75s)
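The specific-port case above follows a start-then-poll pattern: the mount daemon is launched in the background (functional_test_mount_test.go:213), the first findmnt probe fails while the 9p server is still coming up, the retry succeeds, and the mount is finally torn down. A rough Go sketch of the same pattern using the commands from the log; the local source directory and the 30-second timeout are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Background the 9p mount on the fixed port used in the log above.
	mount := exec.Command("out/minikube-linux-amd64", "mount", "-p", "functional-554179",
		"/tmp/mount-src:/mount-9p", "--port", "46464", "--alsologtostderr", "-v=1")
	if err := mount.Start(); err != nil {
		panic(err)
	}
	// The test stops the daemon via its helper; killing the process is the simple equivalent here.
	defer mount.Process.Kill()

	// Poll until findmnt inside the guest sees the 9p mount, mirroring the retry in the log.
	deadline := time.Now().Add(30 * time.Second) // timeout is an assumption
	for time.Now().Before(deadline) {
		probe := exec.Command("out/minikube-linux-amd64", "-p", "functional-554179",
			"ssh", "findmnt -T /mount-9p | grep 9p")
		if probe.Run() == nil {
			fmt.Println("/mount-9p is mounted in the guest")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("mount never became visible")
}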

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.79s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-554179
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 image save --daemon docker.io/kicbase/echo-server:functional-554179 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-554179
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.62s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.44s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-554179 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3959736604/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-554179 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3959736604/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-554179 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3959736604/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-554179 ssh "findmnt -T" /mount1: exit status 1 (249.104128ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-554179 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-554179 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-554179 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3959736604/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-554179 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3959736604/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-554179 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3959736604/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.44s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-554179
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-554179
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-554179
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (213.06s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-925161 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-925161 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m32.378200551s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (213.06s)

TestMultiControlPlane/serial/DeployApp (57.91s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-925161 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-925161 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-925161 -- rollout status deployment/busybox: (5.764487608s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-925161 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.2.2 10.244.2.3 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-925161 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.2.2 10.244.2.3 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-925161 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.2.2 10.244.2.3 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-925161 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.2.2 10.244.2.3 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-925161 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.2.2 10.244.2.3 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-925161 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.2.2 10.244.2.3 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-925161 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.2.2 10.244.2.3 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-925161 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.2.2 10.244.2.3 10.244.0.4'\n\n-- /stdout --"
E0719 04:26:36.834862  130170 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/functional-554179/client.crt: no such file or directory
E0719 04:26:36.840824  130170 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/functional-554179/client.crt: no such file or directory
E0719 04:26:36.851097  130170 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/functional-554179/client.crt: no such file or directory
E0719 04:26:36.871365  130170 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/functional-554179/client.crt: no such file or directory
E0719 04:26:36.911655  130170 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/functional-554179/client.crt: no such file or directory
E0719 04:26:36.991981  130170 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/functional-554179/client.crt: no such file or directory
E0719 04:26:37.152431  130170 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/functional-554179/client.crt: no such file or directory
E0719 04:26:37.473035  130170 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/functional-554179/client.crt: no such file or directory
E0719 04:26:38.114018  130170 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/functional-554179/client.crt: no such file or directory
E0719 04:26:39.394346  130170 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/functional-554179/client.crt: no such file or directory
E0719 04:26:41.955347  130170 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/functional-554179/client.crt: no such file or directory
E0719 04:26:47.076501  130170 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/functional-554179/client.crt: no such file or directory
E0719 04:26:57.316961  130170 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/functional-554179/client.crt: no such file or directory
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-925161 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-925161 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-925161 -- exec busybox-fc5497c4f-5785p -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-925161 -- exec busybox-fc5497c4f-t2m4d -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-925161 -- exec busybox-fc5497c4f-xjdg9 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-925161 -- exec busybox-fc5497c4f-5785p -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-925161 -- exec busybox-fc5497c4f-t2m4d -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-925161 -- exec busybox-fc5497c4f-xjdg9 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-925161 -- exec busybox-fc5497c4f-5785p -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-925161 -- exec busybox-fc5497c4f-t2m4d -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-925161 -- exec busybox-fc5497c4f-xjdg9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (57.91s)
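
The repeated ha_test.go:149 lines above are the test polling kubectl for '{.items[*].status.podIP}' until only the three busybox replicas report an address; the fourth IP presumably belongs to a pod that is still being replaced, which is why the message is marked "may be temporary". A minimal sketch of that readiness check, using hypothetical helper names rather than minikube's own code:

package main

import (
	"fmt"
	"strings"
)

// podIPsReady reports whether the captured jsonpath output lists exactly
// `want` pod IPs; the "-- stdout --" capture wraps the value in single quotes.
func podIPsReady(out string, want int) bool {
	trimmed := strings.Trim(strings.TrimSpace(out), "'")
	return len(strings.Fields(trimmed)) == want
}

func main() {
	fmt.Println(podIPsReady("'10.244.1.2 10.244.2.2 10.244.2.3 10.244.0.4'", 3)) // false: four IPs, keep polling
	fmt.Println(podIPsReady("'10.244.1.2 10.244.2.2 10.244.0.4'", 3))            // true: one IP per replica
}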

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-925161 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-925161 -- exec busybox-fc5497c4f-5785p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-925161 -- exec busybox-fc5497c4f-5785p -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-925161 -- exec busybox-fc5497c4f-t2m4d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-925161 -- exec busybox-fc5497c4f-t2m4d -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-925161 -- exec busybox-fc5497c4f-xjdg9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-925161 -- exec busybox-fc5497c4f-xjdg9 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.20s)
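
The pipeline above ("nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3") takes field 3 of line 5 of nslookup's output, the position where the resolved address of host.minikube.internal is expected, and the test then pings that address from inside each pod. A sketch of the same field selection in Go; the sample output below is hypothetical, since nslookup's exact layout varies:

package main

import (
	"fmt"
	"strings"
)

// hostIP mimics `awk 'NR==5' | cut -d' ' -f3`: take line 5, split on single
// spaces (as cut does), and return field 3.
func hostIP(nslookupOut string) string {
	lines := strings.Split(nslookupOut, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	sample := "Server:    10.96.0.10\nAddress:   10.96.0.10:53\n\nName:      host.minikube.internal\nAddress 1: 192.168.39.1 host.minikube.internal\n"
	fmt.Println(hostIP(sample)) // 192.168.39.1
}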

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (54.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-925161 -v=7 --alsologtostderr
E0719 04:27:17.797215  130170 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/functional-554179/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-925161 -v=7 --alsologtostderr: (54.016592511s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (54.87s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-925161 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.56s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 cp testdata/cp-test.txt ha-925161:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 ssh -n ha-925161 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 cp ha-925161:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3159028946/001/cp-test_ha-925161.txt
E0719 04:27:58.757637  130170 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/functional-554179/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 ssh -n ha-925161 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 cp ha-925161:/home/docker/cp-test.txt ha-925161-m02:/home/docker/cp-test_ha-925161_ha-925161-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 ssh -n ha-925161 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 ssh -n ha-925161-m02 "sudo cat /home/docker/cp-test_ha-925161_ha-925161-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 cp ha-925161:/home/docker/cp-test.txt ha-925161-m03:/home/docker/cp-test_ha-925161_ha-925161-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 ssh -n ha-925161 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 ssh -n ha-925161-m03 "sudo cat /home/docker/cp-test_ha-925161_ha-925161-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 cp ha-925161:/home/docker/cp-test.txt ha-925161-m04:/home/docker/cp-test_ha-925161_ha-925161-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 ssh -n ha-925161 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 ssh -n ha-925161-m04 "sudo cat /home/docker/cp-test_ha-925161_ha-925161-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 cp testdata/cp-test.txt ha-925161-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 ssh -n ha-925161-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 cp ha-925161-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3159028946/001/cp-test_ha-925161-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 ssh -n ha-925161-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 cp ha-925161-m02:/home/docker/cp-test.txt ha-925161:/home/docker/cp-test_ha-925161-m02_ha-925161.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 ssh -n ha-925161-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 ssh -n ha-925161 "sudo cat /home/docker/cp-test_ha-925161-m02_ha-925161.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 cp ha-925161-m02:/home/docker/cp-test.txt ha-925161-m03:/home/docker/cp-test_ha-925161-m02_ha-925161-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 ssh -n ha-925161-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 ssh -n ha-925161-m03 "sudo cat /home/docker/cp-test_ha-925161-m02_ha-925161-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 cp ha-925161-m02:/home/docker/cp-test.txt ha-925161-m04:/home/docker/cp-test_ha-925161-m02_ha-925161-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 ssh -n ha-925161-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 ssh -n ha-925161-m04 "sudo cat /home/docker/cp-test_ha-925161-m02_ha-925161-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 cp testdata/cp-test.txt ha-925161-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 ssh -n ha-925161-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 cp ha-925161-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3159028946/001/cp-test_ha-925161-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 ssh -n ha-925161-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 cp ha-925161-m03:/home/docker/cp-test.txt ha-925161:/home/docker/cp-test_ha-925161-m03_ha-925161.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 ssh -n ha-925161-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 ssh -n ha-925161 "sudo cat /home/docker/cp-test_ha-925161-m03_ha-925161.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 cp ha-925161-m03:/home/docker/cp-test.txt ha-925161-m02:/home/docker/cp-test_ha-925161-m03_ha-925161-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 ssh -n ha-925161-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 ssh -n ha-925161-m02 "sudo cat /home/docker/cp-test_ha-925161-m03_ha-925161-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 cp ha-925161-m03:/home/docker/cp-test.txt ha-925161-m04:/home/docker/cp-test_ha-925161-m03_ha-925161-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 ssh -n ha-925161-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 ssh -n ha-925161-m04 "sudo cat /home/docker/cp-test_ha-925161-m03_ha-925161-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 cp testdata/cp-test.txt ha-925161-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 ssh -n ha-925161-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 cp ha-925161-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3159028946/001/cp-test_ha-925161-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 ssh -n ha-925161-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 cp ha-925161-m04:/home/docker/cp-test.txt ha-925161:/home/docker/cp-test_ha-925161-m04_ha-925161.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 ssh -n ha-925161-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 ssh -n ha-925161 "sudo cat /home/docker/cp-test_ha-925161-m04_ha-925161.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 cp ha-925161-m04:/home/docker/cp-test.txt ha-925161-m02:/home/docker/cp-test_ha-925161-m04_ha-925161-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 ssh -n ha-925161-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 ssh -n ha-925161-m02 "sudo cat /home/docker/cp-test_ha-925161-m04_ha-925161-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 cp ha-925161-m04:/home/docker/cp-test.txt ha-925161-m03:/home/docker/cp-test_ha-925161-m04_ha-925161-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 ssh -n ha-925161-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 ssh -n ha-925161-m03 "sudo cat /home/docker/cp-test_ha-925161-m04_ha-925161-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.76s)
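
The block above is a full copy matrix: testdata/cp-test.txt is copied onto each of the four nodes, read back over ssh, and then copied from that node to every other node and read there as well. A short sketch that enumerates the same node-to-node fan-out (the ssh verification and the copy back to the local /tmp directory are omitted; node names and paths are taken from the log above):

package main

import "fmt"

func main() {
	nodes := []string{"ha-925161", "ha-925161-m02", "ha-925161-m03", "ha-925161-m04"}
	for _, src := range nodes {
		// copy the local test file in, then fan it out to every other node
		fmt.Printf("cp testdata/cp-test.txt %s:/home/docker/cp-test.txt\n", src)
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			fmt.Printf("cp %s:/home/docker/cp-test.txt %s:/home/docker/cp-test_%s_%s.txt\n", src, dst, src, dst)
		}
	}
}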

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.467895374s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.40s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-925161 node delete m03 -v=7 --alsologtostderr: (16.456538559s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.19s)
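
The go-template passed to kubectl above walks every node's conditions and prints the status of the Ready condition, one per line, so the test can assert that each remaining node reports True. A runnable sketch of the same template over a hand-written stand-in for the nodes list (the data below is illustrative, not captured output):

package main

import (
	"os"
	"text/template"
)

// readyTmpl is the template from the kubectl invocation above.
const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	// Minimal stand-in for `kubectl get nodes -o json`; only the fields the
	// template touches are present.
	nodes := map[string]any{
		"items": []any{
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "MemoryPressure", "status": "False"},
				map[string]any{"type": "Ready", "status": "True"},
			}}},
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "Ready", "status": "True"},
			}}},
		},
	}
	t := template.Must(template.New("ready").Parse(readyTmpl))
	_ = t.Execute(os.Stdout, nodes) // prints " True" on its own line for each Ready node
}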

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (324.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-925161 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0719 04:41:36.836208  130170 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/functional-554179/client.crt: no such file or directory
E0719 04:42:59.879923  130170 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/functional-554179/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-925161 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m23.860521048s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (324.64s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (78.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-925161 --control-plane -v=7 --alsologtostderr
E0719 04:46:36.835667  130170 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/functional-554179/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-925161 --control-plane -v=7 --alsologtostderr: (1m17.342711781s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-925161 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.16s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.55s)

                                                
                                    
TestJSONOutput/start/Command (58.7s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-360753 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-360753 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (58.697940556s)
--- PASS: TestJSONOutput/start/Command (58.70s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.69s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-360753 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.59s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-360753 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.64s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-360753 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-360753 --output=json --user=testUser: (6.641317979s)
--- PASS: TestJSONOutput/stop/Command (6.64s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-664506 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-664506 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (58.589193ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"577431bd-d714-4081-bf64-4b9f5edde297","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-664506] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f194180f-7c64-4b8b-bd5a-ced5aa639fed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19302"}}
	{"specversion":"1.0","id":"14c74f4c-9db9-4d79-b65c-75c076df56c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"769d61c9-4c3b-472a-931f-2a0e57654639","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19302-122995/kubeconfig"}}
	{"specversion":"1.0","id":"8059b45d-b0d5-495c-aec3-d1425fdba70a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-122995/.minikube"}}
	{"specversion":"1.0","id":"d5a0d823-ab4f-44da-94a1-567add8dad5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e5a948cf-0daa-4372-ab77-16be41c4bcfd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5fe5a5b8-47b3-41ad-8e32-f4e78ce15860","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-664506" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-664506
--- PASS: TestErrorJSONOutput (0.19s)
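
Each line in the stdout capture above is a CloudEvents-style record emitted by --output=json; the last one carries the error name and exit code that explain the exit status 56. A sketch of pulling those fields out of one such line (the struct is illustrative, not minikube's own type):

package main

import (
	"encoding/json"
	"fmt"
)

// event maps only the fields read below; everything else in the record is ignored.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// The final record from the capture above, verbatim.
	line := `{"specversion":"1.0","id":"5fe5a5b8-47b3-41ad-8e32-f4e78ce15860","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`
	var e event
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		panic(err)
	}
	fmt.Println(e.Type, e.Data["name"], e.Data["exitcode"]) // io.k8s.sigs.minikube.error DRV_UNSUPPORTED_OS 56
}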

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (81.7s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-335344 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-335344 --driver=kvm2  --container-runtime=crio: (39.642822455s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-339424 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-339424 --driver=kvm2  --container-runtime=crio: (39.209450415s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-335344
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-339424
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-339424" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-339424
helpers_test.go:175: Cleaning up "first-335344" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-335344
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-335344: (1.013061157s)
--- PASS: TestMinikubeProfile (81.70s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.6s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-625768 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-625768 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.601636811s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.60s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-625768 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-625768 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)
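
The second command above only passes if "mount" prints at least one line mentioning 9p, which is how the test confirms that /minikube-host is really backed by the 9p host share and not just an empty directory. A tiny sketch of that check (the sample mount lines are hypothetical):

package main

import (
	"fmt"
	"strings"
)

// grep9p mimics `mount | grep 9p`: return every line of mount output that
// mentions 9p.
func grep9p(mountOut string) []string {
	var hits []string
	for _, line := range strings.Split(mountOut, "\n") {
		if strings.Contains(line, "9p") {
			hits = append(hits, line)
		}
	}
	return hits
}

func main() {
	sample := "192.168.39.1 on /minikube-host type 9p (rw,relatime)\n/dev/vda1 on /data type ext4 (rw)\n"
	fmt.Println(len(grep9p(sample)) > 0) // true: a 9p mount is present
}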

                                                
                                    
TestMountStart/serial/StartWithMountSecond (26.27s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-639767 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-639767 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.267148898s)
--- PASS: TestMountStart/serial/StartWithMountSecond (26.27s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-639767 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-639767 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.89s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-625768 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.89s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-639767 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-639767 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-639767
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-639767: (1.268475378s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (19.9s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-639767
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-639767: (18.899435694s)
--- PASS: TestMountStart/serial/RestartStopped (19.90s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-639767 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-639767 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (114.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-270078 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0719 04:51:36.835522  130170 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/functional-554179/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-270078 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m54.087986749s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (114.50s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-270078 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-270078 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-270078 -- rollout status deployment/busybox: (3.844310342s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-270078 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-270078 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-270078 -- exec busybox-fc5497c4f-hnr7x -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-270078 -- exec busybox-fc5497c4f-qzrf4 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-270078 -- exec busybox-fc5497c4f-hnr7x -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-270078 -- exec busybox-fc5497c4f-qzrf4 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-270078 -- exec busybox-fc5497c4f-hnr7x -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-270078 -- exec busybox-fc5497c4f-qzrf4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.27s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-270078 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-270078 -- exec busybox-fc5497c4f-hnr7x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-270078 -- exec busybox-fc5497c4f-hnr7x -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-270078 -- exec busybox-fc5497c4f-qzrf4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-270078 -- exec busybox-fc5497c4f-qzrf4 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)

                                                
                                    
TestMultiNode/serial/AddNode (46.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-270078 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-270078 -v 3 --alsologtostderr: (45.723214595s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.29s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-270078 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 cp testdata/cp-test.txt multinode-270078:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 ssh -n multinode-270078 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 cp multinode-270078:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3247087681/001/cp-test_multinode-270078.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 ssh -n multinode-270078 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 cp multinode-270078:/home/docker/cp-test.txt multinode-270078-m02:/home/docker/cp-test_multinode-270078_multinode-270078-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 ssh -n multinode-270078 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 ssh -n multinode-270078-m02 "sudo cat /home/docker/cp-test_multinode-270078_multinode-270078-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 cp multinode-270078:/home/docker/cp-test.txt multinode-270078-m03:/home/docker/cp-test_multinode-270078_multinode-270078-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 ssh -n multinode-270078 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 ssh -n multinode-270078-m03 "sudo cat /home/docker/cp-test_multinode-270078_multinode-270078-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 cp testdata/cp-test.txt multinode-270078-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 ssh -n multinode-270078-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 cp multinode-270078-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3247087681/001/cp-test_multinode-270078-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 ssh -n multinode-270078-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 cp multinode-270078-m02:/home/docker/cp-test.txt multinode-270078:/home/docker/cp-test_multinode-270078-m02_multinode-270078.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 ssh -n multinode-270078-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 ssh -n multinode-270078 "sudo cat /home/docker/cp-test_multinode-270078-m02_multinode-270078.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 cp multinode-270078-m02:/home/docker/cp-test.txt multinode-270078-m03:/home/docker/cp-test_multinode-270078-m02_multinode-270078-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 ssh -n multinode-270078-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 ssh -n multinode-270078-m03 "sudo cat /home/docker/cp-test_multinode-270078-m02_multinode-270078-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 cp testdata/cp-test.txt multinode-270078-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 ssh -n multinode-270078-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 cp multinode-270078-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3247087681/001/cp-test_multinode-270078-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 ssh -n multinode-270078-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 cp multinode-270078-m03:/home/docker/cp-test.txt multinode-270078:/home/docker/cp-test_multinode-270078-m03_multinode-270078.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 ssh -n multinode-270078-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 ssh -n multinode-270078 "sudo cat /home/docker/cp-test_multinode-270078-m03_multinode-270078.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 cp multinode-270078-m03:/home/docker/cp-test.txt multinode-270078-m02:/home/docker/cp-test_multinode-270078-m03_multinode-270078-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 ssh -n multinode-270078-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 ssh -n multinode-270078-m02 "sudo cat /home/docker/cp-test_multinode-270078-m03_multinode-270078-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.03s)

                                                
                                    
TestMultiNode/serial/StopNode (2.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-270078 node stop m03: (1.386214061s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-270078 status: exit status 7 (437.828619ms)

                                                
                                                
-- stdout --
	multinode-270078
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-270078-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-270078-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-270078 status --alsologtostderr: exit status 7 (427.987956ms)

                                                
                                                
-- stdout --
	multinode-270078
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-270078-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-270078-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 04:53:51.478495  162542 out.go:291] Setting OutFile to fd 1 ...
	I0719 04:53:51.478588  162542 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:53:51.478595  162542 out.go:304] Setting ErrFile to fd 2...
	I0719 04:53:51.478599  162542 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:53:51.478772  162542 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19302-122995/.minikube/bin
	I0719 04:53:51.478921  162542 out.go:298] Setting JSON to false
	I0719 04:53:51.478945  162542 mustload.go:65] Loading cluster: multinode-270078
	I0719 04:53:51.479056  162542 notify.go:220] Checking for updates...
	I0719 04:53:51.479366  162542 config.go:182] Loaded profile config "multinode-270078": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0719 04:53:51.479384  162542 status.go:255] checking status of multinode-270078 ...
	I0719 04:53:51.479825  162542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:53:51.479889  162542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:53:51.498036  162542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37527
	I0719 04:53:51.498480  162542 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:53:51.499147  162542 main.go:141] libmachine: Using API Version  1
	I0719 04:53:51.499189  162542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:53:51.499525  162542 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:53:51.499717  162542 main.go:141] libmachine: (multinode-270078) Calling .GetState
	I0719 04:53:51.501351  162542 status.go:330] multinode-270078 host status = "Running" (err=<nil>)
	I0719 04:53:51.501371  162542 host.go:66] Checking if "multinode-270078" exists ...
	I0719 04:53:51.501642  162542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:53:51.501674  162542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:53:51.516532  162542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34185
	I0719 04:53:51.516957  162542 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:53:51.517423  162542 main.go:141] libmachine: Using API Version  1
	I0719 04:53:51.517445  162542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:53:51.517803  162542 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:53:51.517971  162542 main.go:141] libmachine: (multinode-270078) Calling .GetIP
	I0719 04:53:51.520819  162542 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:53:51.521289  162542 main.go:141] libmachine: (multinode-270078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:0b:92", ip: ""} in network mk-multinode-270078: {Iface:virbr1 ExpiryTime:2024-07-19 05:51:08 +0000 UTC Type:0 Mac:52:54:00:16:0b:92 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-270078 Clientid:01:52:54:00:16:0b:92}
	I0719 04:53:51.521322  162542 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:53:51.521422  162542 host.go:66] Checking if "multinode-270078" exists ...
	I0719 04:53:51.521713  162542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:53:51.521757  162542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:53:51.537662  162542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39113
	I0719 04:53:51.538110  162542 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:53:51.538517  162542 main.go:141] libmachine: Using API Version  1
	I0719 04:53:51.538537  162542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:53:51.538912  162542 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:53:51.539113  162542 main.go:141] libmachine: (multinode-270078) Calling .DriverName
	I0719 04:53:51.539373  162542 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:53:51.539408  162542 main.go:141] libmachine: (multinode-270078) Calling .GetSSHHostname
	I0719 04:53:51.542503  162542 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:53:51.543059  162542 main.go:141] libmachine: (multinode-270078) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:0b:92", ip: ""} in network mk-multinode-270078: {Iface:virbr1 ExpiryTime:2024-07-19 05:51:08 +0000 UTC Type:0 Mac:52:54:00:16:0b:92 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:multinode-270078 Clientid:01:52:54:00:16:0b:92}
	I0719 04:53:51.543092  162542 main.go:141] libmachine: (multinode-270078) DBG | domain multinode-270078 has defined IP address 192.168.39.17 and MAC address 52:54:00:16:0b:92 in network mk-multinode-270078
	I0719 04:53:51.543278  162542 main.go:141] libmachine: (multinode-270078) Calling .GetSSHPort
	I0719 04:53:51.543518  162542 main.go:141] libmachine: (multinode-270078) Calling .GetSSHKeyPath
	I0719 04:53:51.543713  162542 main.go:141] libmachine: (multinode-270078) Calling .GetSSHUsername
	I0719 04:53:51.543915  162542 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/multinode-270078/id_rsa Username:docker}
	I0719 04:53:51.631884  162542 ssh_runner.go:195] Run: systemctl --version
	I0719 04:53:51.637821  162542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:53:51.653781  162542 kubeconfig.go:125] found "multinode-270078" server: "https://192.168.39.17:8443"
	I0719 04:53:51.653810  162542 api_server.go:166] Checking apiserver status ...
	I0719 04:53:51.653844  162542 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 04:53:51.666714  162542 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1149/cgroup
	W0719 04:53:51.675670  162542 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1149/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 04:53:51.675742  162542 ssh_runner.go:195] Run: ls
	I0719 04:53:51.679631  162542 api_server.go:253] Checking apiserver healthz at https://192.168.39.17:8443/healthz ...
	I0719 04:53:51.685390  162542 api_server.go:279] https://192.168.39.17:8443/healthz returned 200:
	ok
	I0719 04:53:51.685417  162542 status.go:422] multinode-270078 apiserver status = Running (err=<nil>)
	I0719 04:53:51.685445  162542 status.go:257] multinode-270078 status: &{Name:multinode-270078 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:53:51.685491  162542 status.go:255] checking status of multinode-270078-m02 ...
	I0719 04:53:51.685907  162542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:53:51.685952  162542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:53:51.702237  162542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39495
	I0719 04:53:51.702718  162542 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:53:51.703275  162542 main.go:141] libmachine: Using API Version  1
	I0719 04:53:51.703299  162542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:53:51.703653  162542 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:53:51.703864  162542 main.go:141] libmachine: (multinode-270078-m02) Calling .GetState
	I0719 04:53:51.706950  162542 status.go:330] multinode-270078-m02 host status = "Running" (err=<nil>)
	I0719 04:53:51.706973  162542 host.go:66] Checking if "multinode-270078-m02" exists ...
	I0719 04:53:51.707251  162542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:53:51.707288  162542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:53:51.722975  162542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36617
	I0719 04:53:51.723425  162542 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:53:51.723953  162542 main.go:141] libmachine: Using API Version  1
	I0719 04:53:51.723975  162542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:53:51.724257  162542 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:53:51.724451  162542 main.go:141] libmachine: (multinode-270078-m02) Calling .GetIP
	I0719 04:53:51.727262  162542 main.go:141] libmachine: (multinode-270078-m02) DBG | domain multinode-270078-m02 has defined MAC address 52:54:00:83:cf:1b in network mk-multinode-270078
	I0719 04:53:51.727689  162542 main.go:141] libmachine: (multinode-270078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:cf:1b", ip: ""} in network mk-multinode-270078: {Iface:virbr1 ExpiryTime:2024-07-19 05:52:14 +0000 UTC Type:0 Mac:52:54:00:83:cf:1b Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:multinode-270078-m02 Clientid:01:52:54:00:83:cf:1b}
	I0719 04:53:51.727721  162542 main.go:141] libmachine: (multinode-270078-m02) DBG | domain multinode-270078-m02 has defined IP address 192.168.39.199 and MAC address 52:54:00:83:cf:1b in network mk-multinode-270078
	I0719 04:53:51.727835  162542 host.go:66] Checking if "multinode-270078-m02" exists ...
	I0719 04:53:51.728247  162542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:53:51.728294  162542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:53:51.743917  162542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44481
	I0719 04:53:51.744383  162542 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:53:51.744937  162542 main.go:141] libmachine: Using API Version  1
	I0719 04:53:51.744962  162542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:53:51.745308  162542 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:53:51.745530  162542 main.go:141] libmachine: (multinode-270078-m02) Calling .DriverName
	I0719 04:53:51.745706  162542 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 04:53:51.745731  162542 main.go:141] libmachine: (multinode-270078-m02) Calling .GetSSHHostname
	I0719 04:53:51.748508  162542 main.go:141] libmachine: (multinode-270078-m02) DBG | domain multinode-270078-m02 has defined MAC address 52:54:00:83:cf:1b in network mk-multinode-270078
	I0719 04:53:51.748905  162542 main.go:141] libmachine: (multinode-270078-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:cf:1b", ip: ""} in network mk-multinode-270078: {Iface:virbr1 ExpiryTime:2024-07-19 05:52:14 +0000 UTC Type:0 Mac:52:54:00:83:cf:1b Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:multinode-270078-m02 Clientid:01:52:54:00:83:cf:1b}
	I0719 04:53:51.748933  162542 main.go:141] libmachine: (multinode-270078-m02) DBG | domain multinode-270078-m02 has defined IP address 192.168.39.199 and MAC address 52:54:00:83:cf:1b in network mk-multinode-270078
	I0719 04:53:51.749166  162542 main.go:141] libmachine: (multinode-270078-m02) Calling .GetSSHPort
	I0719 04:53:51.749362  162542 main.go:141] libmachine: (multinode-270078-m02) Calling .GetSSHKeyPath
	I0719 04:53:51.749521  162542 main.go:141] libmachine: (multinode-270078-m02) Calling .GetSSHUsername
	I0719 04:53:51.749639  162542 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19302-122995/.minikube/machines/multinode-270078-m02/id_rsa Username:docker}
	I0719 04:53:51.828665  162542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:53:51.843262  162542 status.go:257] multinode-270078-m02 status: &{Name:multinode-270078-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0719 04:53:51.843301  162542 status.go:255] checking status of multinode-270078-m03 ...
	I0719 04:53:51.843645  162542 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0719 04:53:51.843693  162542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0719 04:53:51.859576  162542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43597
	I0719 04:53:51.859979  162542 main.go:141] libmachine: () Calling .GetVersion
	I0719 04:53:51.860447  162542 main.go:141] libmachine: Using API Version  1
	I0719 04:53:51.860470  162542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0719 04:53:51.860790  162542 main.go:141] libmachine: () Calling .GetMachineName
	I0719 04:53:51.860978  162542 main.go:141] libmachine: (multinode-270078-m03) Calling .GetState
	I0719 04:53:51.862823  162542 status.go:330] multinode-270078-m03 host status = "Stopped" (err=<nil>)
	I0719 04:53:51.862841  162542 status.go:343] host is not running, skipping remaining checks
	I0719 04:53:51.862850  162542 status.go:257] multinode-270078-m03 status: &{Name:multinode-270078-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)
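
For reference, the stop-node flow exercised above can be reproduced by hand. A minimal sketch using the commands and profile name taken from this run's log (any existing multi-node profile works); `status` is expected to exit with code 7 once one node is down:

    # stop the third node of the multi-node profile
    out/minikube-linux-amd64 -p multinode-270078 node stop m03
    # status lists the stopped node and exits non-zero (exit status 7 in this run)
    out/minikube-linux-amd64 -p multinode-270078 status --alsologtostderr
    echo "status exit code: $?"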

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (39.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-270078 node start m03 -v=7 --alsologtostderr: (38.539374227s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.15s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-270078 node delete m03: (1.590680688s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.11s)
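
The node-readiness check used above relies on a kubectl go-template that is hard to read inline. A minimal sketch of the same query, reformatted for readability (template copied from the test invocation):

    # print the Ready condition status ("True"/"False") for every node, one per line
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'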

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (187.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-270078 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-270078 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m6.617529451s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-270078 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (187.15s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (40.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-270078
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-270078-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-270078-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (60.921085ms)

                                                
                                                
-- stdout --
	* [multinode-270078-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19302-122995/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-122995/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-270078-m02' is duplicated with machine name 'multinode-270078-m02' in profile 'multinode-270078'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-270078-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-270078-m03 --driver=kvm2  --container-runtime=crio: (39.792005092s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-270078
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-270078: exit status 80 (210.495809ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-270078 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-270078-m03 already exists in multinode-270078-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-270078-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.87s)
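
The name-conflict behaviour validated above (exit status 14 for a duplicated profile name, exit status 80 for `node add` against a name already in use) can be checked directly. A minimal sketch using the profile names from this run:

    # rejected with MK_USAGE: the profile name collides with a machine name inside multinode-270078
    out/minikube-linux-amd64 start -p multinode-270078-m02 --driver=kvm2 --container-runtime=crio
    # accepted: a fresh, unique profile name
    out/minikube-linux-amd64 start -p multinode-270078-m03 --driver=kvm2 --container-runtime=crio
    # node add then fails (GUEST_NODE_ADD) because the m03 name is already taken
    out/minikube-linux-amd64 node add -p multinode-270078
    # clean up the standalone profile
    out/minikube-linux-amd64 delete -p multinode-270078-m03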

                                                
                                    
x
+
TestScheduledStopUnix (111.35s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-916613 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-916613 --memory=2048 --driver=kvm2  --container-runtime=crio: (39.786710542s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-916613 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-916613 -n scheduled-stop-916613
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-916613 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-916613 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-916613 -n scheduled-stop-916613
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-916613
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-916613 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-916613
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-916613: exit status 7 (62.250137ms)

                                                
                                                
-- stdout --
	scheduled-stop-916613
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-916613 -n scheduled-stop-916613
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-916613 -n scheduled-stop-916613: exit status 7 (60.545034ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-916613" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-916613
--- PASS: TestScheduledStopUnix (111.35s)
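
The scheduled-stop flow validated above boils down to a handful of CLI calls. A minimal sketch (profile name from this run; schedule durations are arbitrary):

    # schedule a stop 5 minutes out, then replace it with a 15-second schedule
    out/minikube-linux-amd64 stop -p scheduled-stop-916613 --schedule 5m
    out/minikube-linux-amd64 stop -p scheduled-stop-916613 --schedule 15s
    # a pending schedule can be cancelled before it fires
    out/minikube-linux-amd64 stop -p scheduled-stop-916613 --cancel-scheduled
    # once a schedule fires, status reports Stopped and exits with code 7
    out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-916613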

                                                
                                    
x
+
TestRunningBinaryUpgrade (223.93s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3936299766 start -p running-upgrade-601565 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0719 05:11:36.835488  130170 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/functional-554179/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3936299766 start -p running-upgrade-601565 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m19.085946866s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-601565 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-601565 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m20.881444907s)
helpers_test.go:175: Cleaning up "running-upgrade-601565" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-601565
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-601565: (1.202363531s)
--- PASS: TestRunningBinaryUpgrade (223.93s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-561425 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-561425 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (80.577642ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-561425] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19302-122995/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19302-122995/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
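
As the stderr above states, `--no-kubernetes` cannot be combined with `--kubernetes-version`. A minimal sketch of the failing call and the remediation the error message itself suggests:

    # rejected with MK_USAGE (exit status 14): version pinning while Kubernetes is disabled
    out/minikube-linux-amd64 start -p NoKubernetes-561425 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
    # clear any globally configured version, then start without Kubernetes
    out/minikube-linux-amd64 config unset kubernetes-version
    out/minikube-linux-amd64 start -p NoKubernetes-561425 --no-kubernetes --driver=kvm2 --container-runtime=crio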

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (94.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-561425 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-561425 --driver=kvm2  --container-runtime=crio: (1m34.188837314s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-561425 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (94.43s)

                                                
                                    
x
+
TestPause/serial/Start (95.91s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-994122 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-994122 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m35.909801071s)
--- PASS: TestPause/serial/Start (95.91s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (43.02s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-561425 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-561425 --no-kubernetes --driver=kvm2  --container-runtime=crio: (41.618440329s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-561425 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-561425 status -o json: exit status 2 (257.186264ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-561425","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-561425
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-561425: (1.144695909s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (43.02s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (26.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-561425 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-561425 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.550748792s)
--- PASS: TestNoKubernetes/serial/Start (26.55s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (61.8s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-994122 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-994122 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m1.776309697s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (61.80s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-561425 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-561425 "sudo systemctl is-active --quiet service kubelet": exit status 1 (200.433015ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
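
The check above relies on `systemctl is-active` returning non-zero when the kubelet unit is not running (surfaced here as ssh status 3 and an overall exit status 1). A minimal sketch of the same probe, command copied from the log:

    # non-zero exit means the kubelet unit is not active, i.e. Kubernetes is not running
    out/minikube-linux-amd64 ssh -p NoKubernetes-561425 "sudo systemctl is-active --quiet service kubelet"
    echo "kubelet active check exit code: $?"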

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-561425
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-561425: (1.268917759s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (24.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-561425 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-561425 --driver=kvm2  --container-runtime=crio: (24.96024806s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (24.96s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-561425 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-561425 "sudo systemctl is-active --quiet service kubelet": exit status 1 (191.108496ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
x
+
TestPause/serial/Pause (0.78s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-994122 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.78s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.27s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-994122 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-994122 --output=json --layout=cluster: exit status 2 (266.085401ms)

                                                
                                                
-- stdout --
	{"Name":"pause-994122","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-994122","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.27s)
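
The cluster-layout status shown above reports HTTP-style codes per component (418 Paused, 405 Stopped, 200 OK). A minimal sketch of the pause/inspect/unpause cycle these serial tests step through, using the profile from this run:

    # pause the cluster; containers in kube-system and related namespaces are frozen
    out/minikube-linux-amd64 pause -p pause-994122 --alsologtostderr -v=5
    # --layout=cluster reports per-component status codes and exits 2 while paused
    out/minikube-linux-amd64 status -p pause-994122 --output=json --layout=cluster
    # resume the cluster
    out/minikube-linux-amd64 unpause -p pause-994122 --alsologtostderr -v=5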

                                                
                                    
x
+
TestPause/serial/Unpause (0.78s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-994122 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.78s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.97s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-994122 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.97s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.63s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-994122 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-994122 --alsologtostderr -v=5: (1.631094309s)
--- PASS: TestPause/serial/DeletePaused (1.63s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.41s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.41s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.3s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.30s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (142.43s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.641112689 start -p stopped-upgrade-215036 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.641112689 start -p stopped-upgrade-215036 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m22.997059265s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.641112689 -p stopped-upgrade-215036 stop
E0719 05:16:36.834942  130170 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19302-122995/.minikube/profiles/functional-554179/client.crt: no such file or directory
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.641112689 -p stopped-upgrade-215036 stop: (2.177541654s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-215036 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-215036 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (57.254075601s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (142.43s)
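
The upgrade path exercised above starts a cluster with an older release binary, stops it, then restarts the same profile with the binary under test. A minimal sketch (the /tmp path to the v1.26.0 binary is specific to this run):

    # create and stop a cluster with the old release binary
    /tmp/minikube-v1.26.0.641112689 start -p stopped-upgrade-215036 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    /tmp/minikube-v1.26.0.641112689 -p stopped-upgrade-215036 stop
    # restart the same profile with the freshly built binary
    out/minikube-linux-amd64 start -p stopped-upgrade-215036 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio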

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.89s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-215036
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.89s)

                                                
                                    

Test skip (35/221)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.3/cached-images 0
15 TestDownloadOnly/v1.30.3/binaries 0
16 TestDownloadOnly/v1.30.3/kubectl 0
23 TestDownloadOnly/v1.31.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.31.0-beta.0/binaries 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
39 TestDockerFlags 0
42 TestDockerEnvContainerd 0
44 TestHyperKitDriverInstallOrUpdate 0
45 TestHyperkitDriverSkipUpgrade 0
96 TestFunctional/parallel/DockerEnv 0
97 TestFunctional/parallel/PodmanEnv 0
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
138 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
139 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
140 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
145 TestGvisorAddon 0
167 TestImageBuild 0
194 TestKicCustomNetwork 0
195 TestKicExistingNetwork 0
196 TestKicCustomSubnet 0
197 TestKicStaticIP 0
229 TestChangeNoneUser 0
232 TestScheduledStopWindows 0
234 TestSkaffold 0
236 TestInsufficientStorage 0
240 TestMissingContainerUpgrade 0
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    